Article

Research on Power Load Prediction and Dynamic Power Management of Trailing Suction Hopper Dredger

1 College of Automation, Jiangsu University of Science and Technology, Zhenjiang 212100, China
2 Jiangsu Shipbuilding and Ocean Engineering Design and Research Institute, Zhenjiang 212100, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(9), 1446; https://doi.org/10.3390/sym17091446
Submission received: 10 July 2025 / Revised: 9 August 2025 / Accepted: 25 August 2025 / Published: 4 September 2025
(This article belongs to the Section Engineering and Materials)

Abstract

During the continuous operation of a trailing suction hopper dredger (TSHD), equipment workload exhibits significant time-varying characteristics. Maintaining dynamic symmetry between power generation and consumption is crucial for ensuring system stability and preventing power supply failures. The key challenges lie in dynamic perception, accurate prediction, and real-time power management to achieve this equilibrium. To address this issue, this paper proposes and constructs a “prediction-driven dynamic power management method.” Firstly, to model the complex temporal dependencies of the workload sequence, we introduce and improve a dilated convolutional long short-term memory network (Dilated-LSTM) to build a workload prediction model with strong long-term dependency awareness. This model significantly improves the accuracy of workload trend prediction. Based on the accurate prediction results, a dynamic power management strategy is developed: when the predicted total power consumption is about to exceed a preset margin threshold, the Power Management System (PMS) automatically triggers power reduction operations for adjustable loads, aiming to maintain grid balance without interrupting critical loads. If the available generator power is still less than the required power after this reduction and a risk of supply-demand imbalance remains, the system uses an Improved Grey Wolf Optimization (IGWO) algorithm to automatically disconnect some non-critical loads, achieving real-time dynamic symmetry matching of generation capacity and load demand. Experimental results show that this mechanism effectively prevents generator overloads and ship-wide power failures, significantly improving system stability and the reliability of power supply to critical loads. The research results provide effective technical support for intelligent energy efficiency management and safe operation of TSHDs and other vessels with complex working conditions.

1. Introduction

Against the backdrop of global fossil fuel shortages, continually rising fuel costs account for a growing portion of total shipping expenses [1]. As energy consumption is a significant operating cost for dredging enterprises, improving energy efficiency is essential to achieving economic performance and meeting environmental regulations. Trailing suction hopper dredgers (TSHDs), as the core equipment for waterway dredging and maintenance, have expanded their applications from traditional channel maintenance [2] to diversified scenarios, including port construction [3], ecological environment restoration, and climate adaptation projects. For instance, in the Yellow River’s “Secondary Suspended River” regulation project, TSHDs demonstrated outstanding wave resistance and low-disturbance dredging capabilities, which not only ensured the widening of flood discharge channels but also significantly reduced the secondary pollution of aquatic ecosystems caused by suspended sediment dispersion [4]. In this context, improving the energy efficiency and environmental performance of TSHDs is critically important, given their pivotal role in waterway dredging and maintenance operations.
In practice, the current dynamic power management of TSHDs is primarily reactive: when real-time monitoring shows power consumption exceeding generator capacity, the system responds with load reduction operations or prioritized load shedding to maintain grid balance [5,6]. However, this approach has two major drawbacks: (1) Latency: Responses are triggered only after overloads occur, failing to prevent generator failures. (2) Conservatism: High safety margins are set to avoid frequent load shedding, resulting in underutilized generation capacity. The root cause is the lack of predictive support for operational loads, making the system reactive rather than proactive. Conducting short-term forecasting of ship power loads is crucial for energy efficiency optimization and optimal power allocation among power sources [7]. Therefore, forward-looking power management for TSHDs is of great significance.
TSHDs exhibit similar power load characteristics within each operational cycle [8]. Understanding these patterns is essential for accurate load forecasting and ensuring safe and stable operations. Early research explored probabilistic models and optimization control methods. Bochenski et al. applied data-driven methods to optimize TSHDs and developed a probabilistic engine selection method based on normal distribution fitting of engine load, achieving refined energy control and reduced waste [9]. Braaksma et al. focused on overall dredging efficiency and applied model predictive control with successful real-ship validation [10]. However, with growing accuracy requirements, traditional models and some classic machine learning approaches show limitations in handling complex temporal dependencies and nonlinearities. For instance, Vu et al. predicted hybrid tugboat loads under known conditions, but the results were lagging and inaccurate [11]. To improve precision, Farahat et al. combined curve fitting and time series models with genetic algorithms to optimize Gaussian model parameters, reducing prediction errors [12].
Recently, deep learning models have become mainstream in load forecasting due to their powerful nonlinear fitting and long-sequence dependency capabilities. Abumohsen et al. used LSTM, GRU, and RNN with good results [13]. To further enhance performance and robustness, researchers introduced attention mechanisms and advanced architectures. Deng et al. proposed an adaptive sparse attention network to enhance robustness in smart grids [14]. Liu et al. improved TCN with parallel pooling and self-attention for deep feature extraction [15]. Xin et al. combined LSTM-Transformer for improved load forecasting [16]. Guo et al. used a CNN-LSTM-Attention model on GPUs for high-accuracy load prediction [17]. Giacomazzi et al. explored time-fusion Transformers for hourly load prediction across different time ranges [18], showing great potential.
Beyond mainstream approaches, other studies target specific scenarios. Chen et al. proposed an anti-nonstationary algorithm for cloud CPU simulation [19], relevant for ship load variability. Xie et al. developed Zone-out LSTM to improve cloud host prediction [20]. Shu et al. combined a modified chi-square kernel and RBF with weight optimization for enhanced ship load prediction [21]. However, two major challenges remain for TSHDs:
(1)
Insufficient long-term dependency modeling: Dredging loads are influenced by interactions between variables like suction depth and pump speed. Traditional LSTM with sliding windows struggles to capture hour-scale dependencies.
(2)
Lack of real-time performance: Existing models often require a full year of training data and prediction cycles on the order of minutes, failing to meet second-level control demands.
To solve these, this paper focuses on workload prediction and dynamic power management for TSHDs, proposing a “prediction-driven dynamic power management mechanism.” The core objective of this power management mechanism is to establish and maintain a dynamic symmetry between the fluctuating power demand of the dredging equipment and the finite generation capacity, ensuring stable and efficient operation. Accurate prediction serves as the foundation for proactively preserving this critical equilibrium. We introduce a dilated convolutional LSTM (Dilated-LSTM) model with strong long-term dependency awareness. Based on accurate predictions, a dynamic strategy is deployed:
If the predicted load is close to or exceeds generation capacity or margin thresholds, the PMS first lowers adjustable loads; if imbalance persists, the Improved Grey Wolf Optimization (IGWO) algorithm disconnects non-critical loads for real-time load-generation matching. The method framework is shown in Figure 1.
The structure of the remaining paper is as follows: Section 2 introduces RNN, LSTM, and improved LSTM load forecasting methods. Section 3 details the construction of the dynamic power adjustment mechanism. In Section 4, the prediction results are comprehensively analyzed, and a typical power dynamic adjustment case is studied. Finally, Section 5 summarizes the paper and outlines potential future research directions.

2. Load Forecasting Method Based on Improved LSTM

2.1. Overview of Classical LSTM Neural Networks

Recurrent Neural Networks (RNNs) [22,23] are a type of neural network architecture that includes recurrent connections. Compared to traditional neural networks, RNNs are designed to retain and utilize information from previous time steps through recurrent loops [24]. The RNN structure is shown in Figure 2.
At time step t, the hidden layer state $s_t$ and output state $y_t$ are calculated as follows:
$s_t = f_1(U x_t + W s_{t-1} + a)$,  (1)
$y_t = f_2(V s_t + b)$,  (2)
where U, W, and V are the weight matrices of the input-to-hidden, hidden-to-hidden (recurrent), and hidden-to-output connections, respectively; a and b are the biases of the hidden and output layers; $x_t$ is the input vector; and $f_1$ and $f_2$ are the activation functions of the hidden and output layers, respectively.
In theory, RNNs can process time series data of arbitrary length. However, during training, the use of the same weight matrix across time steps often leads to significant error accumulation as the sequence length increases, resulting in gradient explosion or vanishing gradients when processing long sequences.
To address this, gate mechanisms and cell states were introduced into RNNs, giving rise to Long Short-Term Memory (LSTM) networks. LSTM units include a forget gate $f_t$, an input gate $i_t$, an output gate $o_t$, and a cell state $C_t$. The structure is shown in Figure 3.
The LSTM improves computational efficiency by discarding unimportant information (via the forget gate), updating the cell state using inputs and previous hidden states (via the input gate), and then producing output based on selected internal information (via the output gate).
The cell state update can be represented by Equation (3), which allows for more stable gradient flow during backpropagation and supports learning long-term dependencies. This effectively prevents gradient explosion or vanishing problems that affect traditional RNNs.
$C_t = f_t \odot C_{t-1} + i_t \odot \hat{C}_t$,  (3)
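To make the gate computations and Equation (3) concrete, the following minimal NumPy sketch performs a single LSTM step; the dimensions, random weights, and dictionary layout are illustrative assumptions rather than the configuration used in this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step: forget, input, and output gates plus the cell update of Eq. (3)."""
    z = np.concatenate([h_prev, x_t])          # [H_{t-1}, X_t]
    f_t = sigmoid(W["f"] @ z + b["f"])          # forget gate: what to discard from C_{t-1}
    i_t = sigmoid(W["i"] @ z + b["i"])          # input gate: how much of the candidate to admit
    o_t = sigmoid(W["o"] @ z + b["o"])          # output gate: what part of the cell to expose
    c_hat = np.tanh(W["c"] @ z + b["c"])        # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat            # C_t = f_t * C_{t-1} + i_t * C_hat_t  (Eq. 3)
    h_t = o_t * np.tanh(c_t)                    # hidden state passed to the next step
    return h_t, c_t

# toy dimensions and random weights, for illustration only
n_in, n_hid = 6, 16
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```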

2.2. Improved LSTM Algorithm

While LSTM effectively mitigates the gradient issues in traditional RNNs for time series tasks, it still suffers from high computational complexity and a complicated structure. To address these, this paper introduces a dilation mechanism that enables efficient capture of long-term dependencies without the need to stack many LSTM layers. This reduces the number of network parameters and simplifies the architecture. Additionally, layer normalization is applied before each gate in the LSTM to stabilize the training process and alleviate gradient issues.

2.2.1. Dilated Convolution

Dilated convolution increases the receptive field by inserting fixed gaps (dilation rates) within the convolution kernel. Unlike standard convolutions that use densely packed kernel elements, dilated convolutions introduce spatial gaps to allow the kernel to cover and compute information over wider areas. This allows a broader context to be captured without increasing the number of parameters. By properly adjusting the dilation rate, one can balance the size of the receptive field and the computational cost, thereby enhancing model performance. A key hyperparameter, the “dilation rate,” determines the spacing between kernel values during convolution operations [25].
The effective kernel size with dilation is calculated as follows:
$k' = k + (d - 1) \times (k - 1)$,  (4)
where k is the original kernel size, d is the dilation rate, and k′ is the effective kernel size.
The receptive fields of ordinary convolution and dilated convolution are shown in Figure 4 as follows:
As shown in the figure, the dilation mechanism allows neurons to skip intermediate time steps and directly access longer historical information.
By varying dilation rates, the network efficiently learns multi-scale patterns from sparse data, achieving wider receptive fields than traditional LSTM with equivalent parameters.
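As a quick numerical check of the receptive-field arithmetic described above, the short helper below computes the effective kernel size of Equation (4) and the receptive field of a stack of dilated layers; the kernel size of 2 is an assumed example, while the dilation rates (1, 3, 7) are those adopted later in this paper.

```python
def effective_kernel_size(k, d):
    """Effective kernel size of a dilated convolution: k' = k + (d - 1) * (k - 1)  (Eq. 4)."""
    return k + (d - 1) * (k - 1)

def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of dilated convolutions with stride 1."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d          # each layer adds (k - 1) * d time steps of context
    return rf

print(effective_kernel_size(2, 7))              # -> 8
print(receptive_field([2, 2, 2], [1, 3, 7]))    # -> 12 time steps
```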

2.2.2. Dilated LSTM

In a standard LSTM, the current hidden state $H_t$ and cell state $C_t$ depend on the previous time step’s values. For example, the input gate in a standard LSTM is calculated as follows:
$i_t = \sigma(W_i [H_{t-1}, X_t] + b_i)$,  (5)
To allow the network to learn longer-range historical information, we introduce a dilation mechanism into the standard LSTM. The current time step t now depends on the hidden and cell states from time step t − d, where d is the dilation factor. The improved input gate calculation becomes:
$i_t = \sigma(W_i [H_{t-d}, X_t] + b_i)$,  (6)
Instead of stacking L standard LSTM layers to capture L-step historical dependencies, the dilation mechanism allows direct access to distant time steps. A layer with dilation factor d has an effective receptive field of approximately (kernel_size − 1) × dilation_rate + 1; for the dilated LSTM recurrence, the receptive field therefore expands linearly with the dilation factor and the network depth.
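A minimal sketch of the dilated recurrence in Equation (6): a buffer of the last d states lets step t read the hidden and cell states from step t − d. It reuses the `lstm_step` helper and the weight dictionaries from the earlier sketch and is an illustration of the idea, not the authors' implementation.

```python
from collections import deque
import numpy as np

def dilated_lstm_run(x_seq, d, W, b, n_hid):
    """Run an LSTM over x_seq where step t uses the hidden/cell state from step t - d (Eq. 6)."""
    history = deque([(np.zeros(n_hid), np.zeros(n_hid))] * d, maxlen=d)
    outputs = []
    for x_t in x_seq:
        h_lag, c_lag = history[0]               # state from d steps back (zeros before t = d)
        h_t, c_t = lstm_step(x_t, h_lag, c_lag, W, b)
        history.append((h_t, c_t))              # oldest entry drops out automatically
        outputs.append(h_t)
    return np.stack(outputs)

# one branch per dilation rate, as in the multi-scale model of Figure 5;
# in practice each branch would have its own trained weights
branch_outputs = [dilated_lstm_run(np.random.rand(40, 6), d, W, b, 16) for d in (1, 3, 7)]
```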

2.2.3. Dilated LSTM with Layer Normalization

When LSTM processes sequential tasks, its recurrent structure causes gradients to propagate across time steps. As training proceeds, input distributions to each LSTM layer change due to parameter updates, making it difficult for subsequent layers to adapt, reducing training efficiency. To address this problem, we introduce Layer Normalization (LN) before each gate in the dilated LSTM. LN normalizes the inputs of each gate at every time step, maintaining stable mean and variance, which prevents extreme values from saturating sigmoid or tanh functions and alleviates internal covariate shift during training.
The improved Dilated-LSTM model proposed is shown in Figure 5. Input data is first normalized and then converted into sequences of 40 time steps using a sliding window. Each time step is processed in parallel by three LSTM units with dilation rates of (1, 3, 7), which symmetrically capture features across different time scales and provide a more comprehensive and balanced representation of the complex, time-varying load patterns. This multi-scale structure enables the network to simultaneously capture local details (rate 1), short-term patterns (rate 3), and long-term trends (rate 7), forming a pyramid-like temporal awareness. Compared to conventional LSTMs, the dilated model achieves over 7 times larger effective receptive fields while maintaining parameter efficiency and mitigating gradient vanishing issues. Unlike batch normalization, our use of layer normalization ensures stable training dynamics. This prevents overfitting when training data is limited. Moreover, the parallel temporal branches extract complementary features simultaneously. This reduces the need for large datasets, as each sub-module specializes in distinct temporal resolutions, effectively “synthesizing” more training signals from fewer samples.
By introducing multi-scale dilation and layer normalization, the improved Dilated-LSTM expands the receptive field while maintaining parameter efficiency. It captures multi-scale temporal features and alleviates gradient issues in long sequence training, making it significantly more effective than traditional LSTMs in tasks with long-term dependencies and complex temporal patterns.

2.2.4. Load Forecasting Workflow

This paper integrates three categories of input data: historical power data, real-time processing parameters, and external environmental data. After normalization and outlier removal using a sliding window of size 40, these inputs are fed into the dilated LSTM prediction model. The model features a three-layer parallel structure to capture short-term fluctuations, mid-term trends, and long-cycle characteristics.
The forecasting process includes the following steps:
(1)
Data Preprocessing: Normalize and clean time-series data from key variables such as mud pump speed, high-pressure water pump speed, slurry concentration, draft, slurry volume, and ship speed, along with historical power data.
(2)
Model Training: The preprocessed data is divided based on time sequence, with the first 85% used as the training set and the remaining 15% as the test set. When the model meets the accuracy requirements or reaches the maximum number of iterations, it is output and saved.
(3)
Model Testing: Feed the test set into the saved model to predict future trends and output performance metrics.
(4)
Evaluation: Evaluate prediction accuracy using performance metrics.
A flowchart of the algorithm is shown in Figure 6.
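The preprocessing in steps (1) and (2) can be sketched as follows, assuming min-max normalization, a 40-step sliding window, and a chronological 85%/15% split; the array shapes and random data are placeholders for the measured variables listed above, and the outlier-removal step is omitted for brevity.

```python
import numpy as np

def make_windows(series, window=40, horizon=1):
    """Convert a (T, n_features) array into sliding-window inputs X and next-step targets y."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])                    # 40-step input window
        y.append(series[t + window + horizon - 1, 0])     # target: power channel (column 0)
    return np.asarray(X), np.asarray(y)

def min_max_scale(train, test):
    """Min-max normalization fitted on the training split only, to avoid test-set leakage."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (train - lo) / span, (test - lo) / span

T, n_features = 2000, 7                      # toy dimensions; column 0 stands in for power
data = np.random.rand(T, n_features)
split = int(0.85 * T)                        # chronological 85% / 15% split
train, test = min_max_scale(data[:split], data[split:])
X_train, y_train = make_windows(train)
X_test, y_test = make_windows(test)
```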

3. Dynamic Power Adjustment Mechanism

The dynamic power adjustment mechanism is an intelligent power regulation system based on real-time load prediction. Its core consists of two coordinated modules: adjustable load power reduction and hierarchical load shedding. The control logic is designed around a rolling time window architecture. Load forecasting is performed every 10 s, with a 5 s delay allocated for data preprocessing and inference. Once predictions exceed predefined margin thresholds, the system triggers corresponding control actions in under 3 s. Power reduction signals are executed immediately via the PMS interface, while load shedding decisions—if required—are executed within the subsequent 5 s through the IGWO algorithm, ensuring end-to-end latency remains under 20 s.
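A schematic sketch of one rolling control cycle under the timing budget described above; the function names, the 300 kW margin value, and the scheduler comment are illustrative assumptions, not the actual PMS interface.

```python
FORECAST_PERIOD_S = 10     # a new forecast is produced every 10 s
MARGIN_KW = 300            # assumed margin threshold, for illustration only

def control_cycle(predict_load_kw, available_kw, reduce_adjustable, shed_with_igwo):
    """One rolling cycle: forecast, compare against the margin, then reduce and/or shed."""
    predicted_kw = predict_load_kw()                          # ~5 s budget: preprocessing + inference
    limit_kw = available_kw - MARGIN_KW
    if predicted_kw <= limit_kw:
        return "no action"
    reduced_kw = reduce_adjustable(predicted_kw - limit_kw)   # triggered within ~3 s via the PMS
    if predicted_kw - reduced_kw <= limit_kw:
        return "power reduction only"
    deficit_kw = max(0.0, predicted_kw - reduced_kw - available_kw)
    shed_with_igwo(deficit_kw)                                # executed within a further ~5 s
    return "power reduction + load shedding"

# the cycle would be driven by a scheduler, e.g.:
#   while True:
#       control_cycle(...); time.sleep(FORECAST_PERIOD_S)
```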

3.1. Adjustable Equipment Power Reduction Process

When the system detects that the predicted power exceeds the generator’s safe operational threshold, the following protection mechanisms are automatically triggered:
(1)
Activate standby generators to share the load. However, since generator startup requires synchronization time, power reduction is necessary to maintain grid stability before synchronization completes.
(2)
Match current operating conditions from the dredging system with a predefined power limit table, and implement a power reduction strategy for adjustable equipment accordingly.
(3)
Real-time monitoring of power adjustment: if the calculated reduced power falls below the device’s minimum allowed threshold, the system triggers the second-level load shedding mechanism to ensure stable device operation.
The entire regulation process adopts a closed-loop control strategy, achieving smooth power transitions while ensuring equipment safety through multi-parameter collaborative optimization.
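The reduction step can be sketched as a simple clamp-to-minimum pass over the adjustable devices, escalating to load shedding if a deficit remains. The limit values echo Table 8 (summed per pair of devices), while the function name `reduce_adjustable_loads`, the data structures, and the traversal order stand in for the condition-specific power limit table and are assumptions.

```python
# minimum / maximum allowable powers per adjustable device pair (kW), cf. Table 8
LIMITS_KW = {
    "side_thrusters":         (200, 1200),
    "high_pressure_flushing": (1200, 3000),
    "propulsion_motors":      (2000, 6000),
    "mud_pumps":              (4000, 8000),
}

def reduce_adjustable_loads(current_kw, deficit_kw, order=None):
    """Turn adjustable devices down toward their minimum limits until the deficit is covered.

    current_kw: dict of present power per device (kW).
    Returns (new setpoints, remaining deficit); a remaining deficit > 0 means the
    second-level hierarchical load shedding must be triggered.
    """
    order = order or list(LIMITS_KW)          # placeholder for the condition-specific limit table
    setpoints = dict(current_kw)
    remaining = deficit_kw
    for name in order:
        if remaining <= 0:
            break
        p_min, _p_max = LIMITS_KW[name]
        headroom = max(0.0, setpoints[name] - p_min)   # how far this device may be turned down
        cut = min(headroom, remaining)
        setpoints[name] -= cut
        remaining -= cut
    return setpoints, remaining

new_setpoints, shortfall = reduce_adjustable_loads(
    {"side_thrusters": 600, "high_pressure_flushing": 1400,
     "propulsion_motors": 2200, "mud_pumps": 4200},
    deficit_kw=1000,
)
```

With the predicted powers of Table 7 and a 1000 kW deficit, this greedy pass happens to reproduce the adjustments listed later in Table 9, although the actual PMS consults a predefined power limit table for the current operating condition rather than a fixed traversal order.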

3.2. Hierarchical Load Shedding Method Based on Improved Grey Wolf Optimization

3.2.1. Grey Wolf Optimization Algorithm

Inspired by the hunting behavior of gray wolves, Mirjalili et al. proposed the Grey Wolf Optimization (GWO) algorithm in 2014 [26]. GWO models social hierarchy, tracking, encircling, and attacking prey [27].
(1)
Hierarchy: Grey wolves exhibit a four-tier social hierarchy. At the top is the alpha wolf (α), which acts as the leader and decision-maker of the pack, responsible for making group decisions such as hunting and resting. Its commands must be strictly followed by other members. The beta wolf (β) occupies the second level and only obeys the alpha. It assists in decision-making and serves as the primary successor. The delta wolf (δ) ranks third and follows both the alpha and beta. It undertakes supportive tasks such as territory surveillance and caring for the pack. At the bottom is the omega wolf (ω), the most ordinary member of the pack, who must obey the commands of all higher-ranking wolves. In the algorithm, α represents the global best solution, while β and δ represent the next best solutions [28].
(2)
Tracking Prey: Wolves adjust their distance to the prey based on a random factor A, enhancing global search. If |A| > 1, the wolves move away from the prey; if |A| < 1, they move toward it.
(3)
Encircling Prey: Wolves move closer and surround prey. The mathematical expression is:
$D = |C \cdot X_p(t) - X(t)|$,  (7)
$X(t + 1) = X_p(t) - A \cdot D$,  (8)
where $X(t)$ is the current wolf position, $X_p(t)$ is the prey position, $D$ is the distance between the wolf and the prey, and $A$ and $C$ are coefficient vectors:
$A = 2a \cdot r_1 - a$,  (9)
$C = 2 r_2$,  (10)
Here, $a$ decreases from 2 to 0 over the iterations, and $r_1$, $r_2$ are random vectors with components in the range [0, 1].
(4)
Hunting Behavior: Wolves α, β, and δ lead the hunt by estimating prey’s position. Other wolves update their positions based on the leaders’ positions, gradually approaching the global optimum.
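For reference, a compact sketch of the standard GWO loop implementing Equations (7)–(10); the pack size of 20 and the 90 iterations follow the settings quoted in Section 4.3.2, while the sphere objective and the search bounds are placeholders.

```python
import numpy as np

def gwo_minimize(objective, dim, n_wolves=20, max_iter=90, lb=-1.0, ub=1.0, seed=0):
    """Standard Grey Wolf Optimizer: alpha, beta, and delta lead; the rest follow (Eqs. 7-10)."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(max_iter):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2.0 * (1.0 - t / max_iter)                    # linear convergence factor: 2 -> 0
        new_wolves = np.empty_like(wolves)
        for i, x in enumerate(wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                      # Eq. (9)
                C = 2.0 * r2                              # Eq. (10)
                D = np.abs(C * leader - x)                # Eq. (7)
                candidates.append(leader - A * D)         # Eq. (8)
            new_wolves[i] = np.mean(candidates, axis=0)   # equal-weight update in standard GWO
        wolves = np.clip(new_wolves, lb, ub)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)], float(fitness.min())

# toy usage: minimize the sphere function in 5 dimensions
best_x, best_f = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
```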

3.2.2. Improved Grey Wolf Optimization

This study proposes two optimization strategies for the GWO algorithm: (1) adjusting the convergence factor to balance global exploration and local exploitation, thereby improving convergence accuracy; (2) dynamically adjusting the exploration weights of the wolf pack to accelerate convergence and enhance precision, thus improving the algorithm’s engineering applicability.
(1)
Nonlinear Adjustment Strategy for the Convergence Factor
The global exploration and local exploitation abilities of the GWO algorithm are highly dependent on the value of parameter A: when |A| > 1, the wolves expand their search range (global exploration); when |A| < 1, the wolves narrow the search and focus on local optimization (local exploitation). The value of A is determined by the convergence factor a. Therefore, the convergence factor plays a crucial role in balancing exploration and exploitation within the algorithm. The standard linear convergence factor struggles to meet the dual requirements of wide exploration in the early iterations and fast convergence in the later ones. To address this, a nonlinear convergence factor is proposed.
In the early stages of iteration, the nonlinear convergence factor decreases slowly, allowing parameter A to remain large for a longer period, which enhances global exploration. In the later stages of iteration, the nonlinearly decreasing convergence factor accelerates the process, causing parameter A to decrease rapidly, thereby enhancing the algorithm’s local exploitation capability. This strategy improves the dynamic characteristics of the convergence factor, thereby optimizing the balance between global and local search in the GWO algorithm [29].
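The exact nonlinear expression is not reproduced here, so the sketch below contrasts the standard linear schedule with one commonly used nonlinear form (a quadratic decay) chosen purely to illustrate the intended behaviour of decreasing slowly early and rapidly late; the specific formula is an assumption, not the paper's.

```python
def convergence_factor_linear(t, t_max):
    """Standard GWO schedule: a decreases linearly from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def convergence_factor_nonlinear(t, t_max):
    """One possible nonlinear schedule (assumed form, not the paper's exact formula):
    it stays large in early iterations and drops quickly toward the end."""
    return 2.0 * (1.0 - (t / t_max) ** 2)

t_max = 90
for t in (0, 30, 60, 89):
    print(t, round(convergence_factor_linear(t, t_max), 3),
          round(convergence_factor_nonlinear(t, t_max), 3))
```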
(2)
Dynamic Proportional Weighting Strategy
In the standard GWO algorithm, the three leading wolves—alpha (α), beta (β), and delta (δ)—are treated as equally important when updating the positions of other wolves, and the update process is performed statically. While this method works well for many conventional optimization problems, it often struggles to effectively solve high-dimensional, complex, and multi-modal problems. To address this issue, this paper proposes a dynamic proportional weighting strategy based on Euclidean step distance. The formulation is as follows. This strategy dynamically adjusts the weighting ratios in the position update of wolves by calculating the step-wise Euclidean distances of the α,β, and δ wolves relative to the ω wolf. The wolf that is closest to the ω wolf is assigned the highest weight, the second closest receives a moderate weight, and the farthest is given the lowest weight.
if $X_2 > X_3 > X_1$ or $X_3 > X_2 > X_1$ or $X_3 > X_1 > X_2$:
$W_1 = \frac{X_1 + X_2 + X_3}{X_1}$,  $W_2 = \frac{X_1 + X_2 + X_3}{X_2}$,  $W_3 = \frac{X_1 + X_2 + X_3}{X_3}$,
$X_i(t + 1) = \frac{W_1 X_1 + W_2 X_2 + W_3 X_3}{W_1 + W_2 + W_3}$,  (11)
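Under the interpretation that $X_1$, $X_2$, and $X_3$ are the candidate positions driven by the α, β, and δ wolves, with weights inversely proportional to their step-wise Euclidean distances from the current (ω) wolf, Equation (11) can be sketched as follows; this reading of the formula is an assumption.

```python
import numpy as np

def weighted_position_update(x, x1, x2, x3):
    """Dynamic proportional weighting (cf. Eq. (11)): the leader-driven candidate that lies
    closest to the current wolf x receives the largest weight in the position update."""
    dists = np.array([np.linalg.norm(c - x) for c in (x1, x2, x3)]) + 1e-12
    weights = dists.sum() / dists               # closest candidate -> largest weight
    return (weights[0] * x1 + weights[1] * x2 + weights[2] * x3) / weights.sum()
```

Replacing the equal-weight mean in the earlier `gwo_minimize` sketch with this update, together with a nonlinear convergence factor, yields an IGWO-style variant in the spirit of the two improvements described in this section.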
The flowchart of improved grey wolf algorithm is shown in Figure 7.

3.3. Hierarchical Load Shedding Method

This model’s core goal is to determine the optimal shedding strategy based on real-time power conditions and load characteristics, achieving maximum power relief at the lowest cost.
(1)
Load Modeling and Indicator Definition
Assume there are n shed-able loads, denoted as set L:
$L = \{L_1, L_2, \ldots, L_n\}$,  (12)
Each load has attributes:
$L_i = \{P_i, I_i, C_i, S_i, t_i\}$,  (13)
In this equation, $P_i$ represents the rated power of the load; $I_i$ denotes the importance coefficient, ranging from 0 to 1, where a higher value indicates greater importance to the system; the shedding cost coefficient $C_i$ reflects the operational impact when the load is shed; $S_i$ determines whether shedding is permitted (1 for allowed, 0 for forbidden); and $t_i$ specifies the response delay time of the load.
(2)
Load Shedding Priority Function Design
Based on practical engineering experience, the load shedding order should prioritize the following factors: rated power, operational importance coefficient, and shedding cost coefficient. Devices with smaller rated power are more suitable for early-stage shedding, as this helps avoid sudden large power fluctuations that could impact system stability. A lower operational importance coefficient indicates that the load is less critical to the system, and thus it should be given higher shedding priority. A lower shedding cost coefficient means that disconnecting the load will have less impact on the system, so such loads should also be prioritized for shedding. The core goal of this model is to determine the optimal shedding strategy that minimizes the operational impact (cost) while strictly adhering to the power balance constraint, thereby restoring the violated symmetry between supply and demand as efficiently as possible.
A multi-factor shedding score function is defined:
$W_i = \alpha \cdot \frac{1}{I_i + \varepsilon} + \beta \cdot C_i + \gamma \cdot \frac{1}{P_i}$,  (14)
where $\alpha$, $\beta$, and $\gamma$ are configurable weights and $\varepsilon$ is a small positive constant that prevents division by zero. A lower $W_i$ indicates a higher shedding priority.
The load shedding model sorts all shedable devices based on their scores, resulting in a prioritized load shedding sequence.
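A direct sketch of Equation (14) and the resulting ordering, following the convention stated above that a lower $W_i$ means a higher shedding priority; the default weights, the small ε, and the dictionary keys are assumptions.

```python
def shedding_score(P, I, C, alpha=1.0, beta=1.0, gamma=1.0, eps=1e-3):
    """Multi-factor shedding score W_i = alpha/(I + eps) + beta*C + gamma/P (Eq. 14)."""
    return alpha / (I + eps) + beta * C + gamma / P

def shedding_order(loads):
    """Sort shed-able loads by score; loads with lower W_i are shed first."""
    sheddable = [l for l in loads if l["S"] == 1]      # only loads whose shedding is permitted
    return sorted(sheddable, key=lambda l: shedding_score(l["P"], l["I"], l["C"]))
```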
(3)
Shedding Combination Optimization Model
Assume the current power deficit is $P_{df}$. The goal is to minimize the overall impact of shedding while meeting the power compensation needs. Define the control decision variable:
$x_i = \begin{cases} 1, & \text{shed load } L_i \\ 0, & \text{do not shed} \end{cases}$,  (15)
Objective function:
$\min \sum_{i=1}^{n} x_i (\lambda_1 I_i + \lambda_2 C_i)$,  (16)
Here, $\lambda_1$ and $\lambda_2$ are weighting factors that guide the model, under the condition of meeting the power demand, to prioritize shedding loads that are less important and have lower shedding costs.
The following are the constraints of the optimization algorithm:
  • Power Balance:
$\sum_{i=1}^{n} x_i P_i \geq P_{df}$,  (17)
  • Shedding Feasibility:
$x_i \leq S_i, \quad \forall i$,  (18)
  • Response Time:
$t_i \leq T_{max}$,  (19)
  • Shedding Count Constraint:
$\sum_{i=1}^{n} x_i \leq N_{max}$,  (20)
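One way to hand this model to the IGWO search is a penalized fitness over binary shedding decisions, sketched below; rounding continuous wolf positions to 0/1 and the penalty coefficient are implementation assumptions not specified in the paper.

```python
import numpy as np

def shedding_fitness(position, loads, P_df, T_max, N_max, lam1=1.0, lam2=1.0, penalty=1e4):
    """Objective of Eq. (16) plus penalty terms for constraints (17)-(20).

    position: continuous wolf position, rounded to a binary shedding vector x.
    loads:    list of dicts with keys P, I, C, S, t.
    """
    x = (np.asarray(position) > 0.5).astype(float)                     # decision variables x_i
    cost = sum(xi * (lam1 * l["I"] + lam2 * l["C"]) for xi, l in zip(x, loads))
    shed_power = sum(xi * l["P"] for xi, l in zip(x, loads))
    violation = 0.0
    violation += max(0.0, P_df - shed_power)                           # (17) shed at least the deficit
    violation += sum(xi for xi, l in zip(x, loads) if l["S"] == 0)     # (18) never shed forbidden loads
    violation += sum(xi for xi, l in zip(x, loads) if l["t"] > T_max)  # (19) response time limit
    violation += max(0.0, x.sum() - N_max)                             # (20) at most N_max devices
    return cost + penalty * violation
```

With `dim = len(loads)` and bounds of [0, 1], this fitness can be handed directly to the `gwo_minimize` sketch from Section 3.2.1 or an IGWO-style variant of it.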
(4)
Load Priority
Table 1 lists the load priority levels.
The hierarchical load shedding model developed in this section uses load power, task criticality, and shedding cost as core parameters. Combined with the real-time power deficit, it defines an optimized shedding function and control constraints that are both flexible and practically applicable in engineering scenarios.

4. Case Study

4.1. Data Source

This study employs three datasets in the experiments, namely the operation dataset of the left mud pump of TSHD, the high-pressure flushing dataset, and the total power dataset.
The datasets were collected from actual operations of the trailing suction hopper dredger “Changjing 6.” They include key variables such as operating power, dredge pump speed, slurry concentration, midship draft, and soil type. The datasets span from 00:00 on 13 May 2024, to 23:59 on 13 May 2024, with recordings taken every 5 s, resulting in a total of 17,280 data points.
For the experiments in this study, data from one complete operational cycle was extracted for all three datasets, with a 7:3 split ratio between training and test sets.
To verify the robustness of the proposed Dilated-LSTM model under realistic working conditions, we introduced synthetic noise and missing values into the input sensor data. Gaussian noise with a standard deviation of 5% of the original signal range was added to simulate sensor fluctuations.
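The disturbance injection described above can be reproduced in a few lines; the 5% standard deviation follows the text, whereas the missing-value fraction is an assumed placeholder because the paper does not state it.

```python
import numpy as np

def add_sensor_disturbances(data, noise_frac=0.05, missing_frac=0.01, seed=0):
    """Add zero-mean Gaussian noise with std = noise_frac of each channel's range, and
    blank a small random fraction of samples (missing_frac is an assumed value)."""
    rng = np.random.default_rng(seed)
    span = data.max(axis=0) - data.min(axis=0)
    noisy = data + rng.normal(0.0, noise_frac * span, size=data.shape)
    mask = rng.random(data.shape) < missing_frac
    noisy[mask] = np.nan                 # missing values, to be imputed during preprocessing
    return noisy
```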

4.2. Analysis of Load Prediction Experiment Results

4.2.1. Model Evaluation Indicators

To comprehensively evaluate the predictive performance of the model, this study employs three metrics commonly used in time series forecasting. In the following, assume there are n samples, where $y_i$ represents the actual value of the i-th sample and $\hat{y}_i$ denotes the corresponding predicted value generated by the model. The metrics are defined as follows:
(1)
Coefficient of Determination (R2)
The coefficient of determination (R²) describes the goodness-of-fit of the model to the data, with a value range of (−∞, 1]. An R² value closer to 1 indicates better predictive performance of the model, while an R² value approaching 0 suggests poor predictive capability. The formula is given as follows:
$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$,  (21)
where $\bar{y}$ is the mean of the actual values.
(2)
Mean Absolute Error (MAE)
The Mean Absolute Error (MAE) calculates the average absolute difference between the predicted values and the actual values. For model evaluation, a smaller MAE indicates better performance, as it reflects closer alignment between predicted and actual data on average. The formula is defined as follows:
$MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|$,  (22)
(3)
Mean Absolute Percentage Error (MAPE)
The Mean Absolute Percentage Error (MAPE) quantifies the average percentage difference between predicted and actual values, providing a relative measure of prediction error scaled by the magnitude of the true values. Generally, a lower MAPE indicates better model performance, as it reflects smaller deviations between predictions and ground truth, thereby implying higher predictive accuracy. The formula is expressed as follows:
$MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$.  (23)
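The three metrics translate directly into NumPy; this sketch assumes the actual series contains no zero values when computing MAPE.

```python
import numpy as np

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot                                  # Eq. (21)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))                       # Eq. (22)

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))    # Eq. (23)
```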

4.2.2. Experimental Analysis

In this section, comparative experiments were conducted to validate the superiority of the proposed model in sample prediction. The LSTM and RNN models were selected for comparison, with their network parameters consistent with those listed in Table 2. For different equipment and various dimensions, we trained deep learning networks separately for prediction. Comparative graphs of left mud pump power (representative of dredge pump operations as both left and right pumps exhibit similar operational characteristics), high-pressure flushing power, and total power prediction are presented. As these parameters are critical factors affecting dredging operation efficiency and energy consumption, their prediction comparison plots can effectively demonstrate the performance differences among various models in power prediction, thereby sufficiently illustrating the effectiveness and advantages of the proposed method.
All models employed the Huber loss function as their objective function and were optimized using the Adam algorithm (β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸) with a learning rate of 0.003. Each model was trained for up to 100 epochs, with early stopping applied if the validation loss did not improve for 10 consecutive epochs, to prevent overfitting. The activation functions were carefully selected for different components: LSTM gates utilized sigmoid functions for forget/input/output gate controls to maintain [0, 1] gating behavior, while tanh activation governed cell state updates to ensure smooth gradient flow. For feature transformation in dense layers, ReLU activation was adopted to enhance nonlinear representation capability, and a linear activation function was applied at the output layer to accommodate the regression task’s continuous value prediction requirements. After completing the model configuration, predictions were conducted for key parameters, including the left mud pump power, high-pressure flushing power, and total generated power. This study adopts a multivariate time-aligned input architecture that precisely captures the dynamic variations in historical power sequences across multiple consecutive time steps through dedicated power channels. This architecture simultaneously integrates processing parameter channels to acquire multidimensional operational parameters (including critical variables such as rotational speed and slurry concentration) within the same temporal window. Furthermore, a specially designed environmental channel incorporates time-delay embedding techniques to handle slowly varying environmental parameters like water depth and soil type. This multi-scale fused input structure not only ensures the real-time responsiveness to high-frequency dynamic parameters but also accounts for the long-term effects of slowly changing environmental factors, thereby providing the model with comprehensive and harmonized multi-source spatiotemporal feature representation.
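A hedged sketch of the stated training configuration (Huber loss, Adam with β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸, learning rate 0.003, up to 100 epochs, early stopping after 10 epochs without improvement), written here in PyTorch; the tiny stand-in model and the placeholder training and validation passes are assumptions, since the actual Dilated-LSTM architecture and training framework are not reproduced in this snippet.

```python
import torch
from torch import nn

# stand-in model: the real network is the Dilated-LSTM of Section 2; shapes are placeholders
model = nn.Sequential(nn.Flatten(), nn.Linear(40 * 7, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.003, betas=(0.9, 0.999), eps=1e-8)
criterion = nn.HuberLoss()

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(100):                      # up to 100 epochs
    # ... one pass over the training windows would go here, e.g.
    #     loss = criterion(model(x_batch), y_batch); loss.backward(); optimizer.step() ...
    val_loss = float("inf")                   # placeholder: Huber loss on the validation split
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # early stopping after 10 epochs without improvement
            break
```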
The predictive performance of the model plays a critical role in both equipment-level and system-level dimensions. At the equipment level, the prediction model analyzes real-time operational status, load characteristics, and environmental parameters of individual devices, providing precise start-stop decision support for power equipment with automatic power reduction capabilities. This ensures optimal operation within the most efficient power range. At the system level, the model generates system-wide power demand predictions based on macro-level data, including grid load distribution, generation output, and network topology. These predictions directly trigger the PMS (Power Management System)’s dynamic power adjustment mechanism. At the equipment level, our analysis focuses on two critical power parameters: (1) the left mud pump power (Figure 8), which reflects the main excavation energy consumption, and (2) the high-pressure flush power (Figure 9), representing the cleaning system’s energy demand. At the system level, Figure 10 presents the total generated power prediction, which integrates all subsystem demands for comprehensive energy management.
As evident from the figures, the RNN demonstrates suboptimal performance in predicting left mud pump power, high-pressure flushing power, and total power. While RNNs can capture basic temporal trends, they exhibit significant limitations in modeling long-term dependencies, often resulting in prediction lag when handling complex nonlinear fluctuations during continuous operations. In contrast, LSTM achieves notably higher prediction accuracy than RNN, generally approximating the variation trends. This improvement stems from two key factors: on the one hand, LSTMs better capture the dynamic power variation patterns during dredging operations under complex working conditions; on the other hand, the incorporation of forget gates, input gates, and output gates addresses the vanishing gradient problem, thereby enhancing long-term sequence dependency learning. However, prediction inaccuracies persist, as exemplified by the failure to accurately predict certain peak values in Figure 8b and the oversimplified curve representation in Figure 9b, which inadequately captures detailed power curve characteristics, indicating substantial room for improvement.
By integrating improved dilated convolution with LSTM, the proposed Dilated-LSTM demonstrates superior performance over the other two models in predicting left mud pump power, high-pressure flushing power, and shaft-generated power. The model not only accurately captures the true variation trends of parameters but also achieves excellent fitting between actual and predicted curves. While maintaining parameter efficiency, the Dilated-LSTM effectively captures multi-scale temporal features and mitigates gradient vanishing issues during long-sequence training, thereby enabling more precise data predictions. These results validate the significance of the load forecasting method proposed in this study.
To objectively evaluate the predictive performance of different models, quantitative comparisons were conducted using three metrics: R2, MAE, and MAPE. The specific numerical results for each prediction model are presented in the Table 3, Table 4 and Table 5.
Experimental results demonstrate that the Dilated LSTM consistently outperforms other models across all prediction tasks. For the left dredge pump power prediction, the Dilated LSTM achieved an R2 of 0.99, surpassing RNN and LSTM by 21% and 3%, respectively. Additionally, it reduced MAE and MAPE by 75.3% and 83.3%, indicating significantly superior error control compared to conventional models. Similarly, in high-pressure flushing power and total generated power prediction, the Dilated LSTM maintained a stable R2 of 0.99, with MAE reductions exceeding 62% and MAPE consistently below 1.07%, further validating its exceptional prediction accuracy. When considering the average performance across all three metrics, the Dilated LSTM improved R2 by 20.7% and 10.3% over RNN and LSTM, respectively, while reducing MAE by 74.8% and 54.4%, and optimizing MAPE by 80.1% and 45.2% on average.
These quantitative results conclusively demonstrate that the Dilated LSTM, through its enhanced ability to capture temporal features, achieves superior stability and adaptability in load forecasting, highlighting its significant potential for engineering applications.

4.2.3. Ablation Study on Dilation Factor Selection

To evaluate the impact of different dilation rate configurations on the performance of the proposed Dilated-LSTM model, we conducted an ablation study comparing several combinations of dilation factors. The experimental subject is the left mud pump. Specifically, we experimented with the following configurations: Group 1: single dilation factor (1); Group 2: small-scale factors (1, 3); Group 3: the proposed combination (1, 3, 7); Group 4: large-scale factors (1, 7, 15).
Each configuration was used to train the model under identical settings, including the same optimizer, learning rate, and number of epochs. The models were evaluated on a validation set using R2, MAE, and MAPE as the primary metrics.
The ablation study results demonstrate the critical impact of dilation factor selection on model performance. As shown in Table 6, the proposed (1, 3, 7) combination achieves superior performance (R2 = 0.99, MAE = 8.96 kW, MAPE = 0.95%) compared to other configurations. The single dilation factor (1) (equivalent to standard LSTM) exhibits the weakest performance (R2 = 0.91, MAE = 18.6 kW), confirming its limitation in capturing long-term dependencies. While the (1, 3) combination shows improved accuracy (R2 = 0.96), it still underperforms the proposed method by 3% in R2, suggesting the necessity of incorporating larger dilation scales. Notably, the (1, 7, 15) configuration shows degraded performance (MAE = 10.4 kW) compared to (1, 3, 7), indicating that excessive dilation rates may overlook important local features. These results validate that the (1, 3, 7) combination optimally balances multi-scale temporal feature extraction, where d = 1 captures instantaneous fluctuations, d = 3 models medium-term operational patterns, and d = 7 identifies long-duration load trends characteristic of dredging cycles. The 52% reduction in MAE compared to single-factor LSTM demonstrates the effectiveness of this hierarchical dilation design in addressing TSHD’s complex load prediction challenges.

4.3. Implementation of Dynamic Power Regulation Scheme

The multidimensional prediction results provide significant operational guidance for subsequent dynamic power adjustment. At the equipment level, the power predictions serve as the basis for automatic power reduction, enabling adaptive adjustment of adjustable equipment’s power output according to forecasted variations. At the system level, when predicted power values exceed predefined thresholds, they trigger dynamic power adjustments, including power reduction and hierarchical load shedding. Specifically, based on load forecasting results, the Power Management System (PMS) initiates automatic power reduction through adjustable loads when projected electricity demand exceeds available generator capacity or approaches preset margin thresholds, thereby maintaining grid stability. Should power imbalance risks persist after power reduction measures, the system will automatically shed non-critical loads through the IGWO algorithm, achieving dynamic matching between power supply capacity and load demand. This two-stage protection mechanism effectively prevents generator overload while ensuring uninterrupted operation of critical loads.

4.3.1. Implementation of Power Reduction Scheme

The following experiment presents a case study of dynamic power adjustment triggered when load forecasting indicates demand exceeding available generator capacity. A typical power deficit scenario is simulated: increased operational loads due to greater dredging depth, slurry concentration, or discharge distance elevate the predicted power demand for high-pressure flushing pumps and mud pumps. The forecasted load distribution is shown in Table 7.
With the current generator set’s maximum available power for operational equipment being 7600 kW, this results in a 1000 kW power deficit. To maintain grid stability, adjustable equipment must undergo controlled power reduction, while ensuring compliance with the adjustable construction equipment’s power limitations as specified in Table 8 for the given operational scenario.
Based on the equipment priority levels specified in Table 1, we implemented corresponding power reductions for the mud pumps, high-pressure flushing pumps, and propulsion equipment to create time for parallel connection of standby generators. The detailed power adjustment scheme is presented in Table 9.

4.3.2. Implementation of the Hierarchical Load Shedding Scheme

In the scenario where the power deficit reaches 1400 kW due to increased demand from other essential equipment, and power reduction becomes insufficient to maintain system stability, given minimum operational power requirements, hierarchical load shedding must be implemented. This operation is subject to two additional constraints: (1) mud pumps and high-pressure flushing systems must not be shed, and (2) the total number of unloaded devices cannot exceed seven.
This experiment validates the effectiveness of the IGWO-based hierarchical load shedding strategy in a trailing suction hopper dredger’s power system. Using MATLAB R2024a, we compare the dynamic response of five algorithms (standard GWO, IGWO, SSA, SA, and WOA) under power deficit conditions, evaluating their load-shedding decision accuracy and convergence speed. The study considers the following equipment: thrusters, flushing systems, propulsion motors, mud pumps, air conditioning systems, kitchen equipment, and entertainment cabin loads. The parameters were set to a wolf pack size of N = 20 and a maximum of T = 90 iterations using adaptive convergence factors. Figure 11 presents the simulation optimization iteration curve.
As illustrated in Figure 11, all five optimization algorithms (SSA, IGWO, GWO, SA, WOA) exhibit a decreasing trend in objective function values during the iterative process. The IGWO and GWO algorithms demonstrate the fastest initial convergence characteristics, indicating that the grey wolf algorithm family possesses significant early-stage convergence advantages. These algorithms complete the primary optimization process within 20 iterations, reducing the objective function value to approximately 100 with minimal oscillation, thereby validating the efficient exploration capability of grey wolf algorithms under swarm intelligence collaboration mechanisms.
In contrast, SA exhibits the slowest performance, only beginning to converge after 30 iterations, highlighting the efficiency limitations of simulated annealing in continuous optimization problems. Notably, both WOA and SSA converge to a suboptimal value of 150 during the optimization phase, indicating susceptibility to local optima. Comparatively, IGWO shows superior performance in both convergence precision and stability, with these comparative results fully verifying the enhancement effects achieved through IGWO’s improvement strategies.
The load shedding plan is shown in Table 10.
The IGWO-based optimization results demonstrate that under a 1400 kW power deficit scenario caused by high-pressure flushing pump activation and full-load operation of cabin pumps, the system generates an optimal load-shedding scheme that simultaneously unloads the air conditioning system, other types of pumps, side thrusters, lighting systems, kitchen equipment, and entertainment cabin loads. This scheme releases exactly 1400 kW of power, completely covering the power deficit while preserving adequate safety margins. The solution strictly complies with all operational constraints by maintaining high-pressure flushing systems and mud pumps as required, while limiting the total number of unloaded devices to seven to satisfy system restrictions.
From an engineering perspective, although unloading the thruster may impact vessel maneuverability, this compromise remains acceptable during low-speed dredging operations. Moreover, the hierarchical load-shedding strategy ensures transient grid stability during the power adjustment process. Overall, the proposed scheme demonstrates excellent performance in terms of power compensation accuracy, constraint compliance, optimization efficiency, and operational practicality, thereby validating the effectiveness and applicability of the IGWO algorithm for hierarchical load-shedding decision-making in marine power systems.
In terms of computational time, the bar chart in Figure 12 and the data in Table 11 reveal significant differences among the five optimization algorithms. The SA algorithm exhibited the highest computational burden at 0.47 s, substantially exceeding other methods. In contrast, GWO demonstrated optimal efficiency with merely 0.03 s. The proposed IGWO required 0.23 s; while this represents a 0.2 s increase compared to standard GWO, our earlier analysis confirms this additional time cost yields a 51% improvement in optimization accuracy. WOA and SSA showed intermediate performance at 0.04 and 0.1 s, respectively. Overall, IGWO achieves an excellent balance between computational efficiency and solution quality, maintaining both acceptable real-time performance for most engineering applications and superior optimization precision.

4.4. Power Regulation Human–Machine Interface (HMI)

To facilitate real-time prediction and control, the proposed system was deployed on an onboard embedded computing platform. The hardware stack includes an industrial-grade ARM Cortex-A72 quad-core processor with 8 GB RAM and solid-state storage, running a lightweight Linux-based operating system (Ubuntu Core). The software stack comprises a Python 3.13-based inference engine optimized with TensorRT acceleration for neural network deployment, and real-time control scripts interfacing with the vessel’s power management system via CAN bus and Modbus protocols. This hardware-software configuration ensures low-latency response, energy efficiency, and reliable performance under maritime conditions. To complement the aforementioned load forecast-based dynamic power adjustment method, this study also developed an HMI. Figure 13 shows the 24 h power prediction trends for the mud pump and the total power generation equipment, respectively. By processing historical operating conditions and operational parameters, the system predicts the power variation in the equipment during subsequent operation.
Figure 14 shows the hierarchical load shedding interface, which displays the total available power from the generator set, the power deficit, and the list of loads that can be shed. The order of load shedding is determined by the importance of each load: critical equipment is disconnected from the grid only when the available power is extremely low. In special cases, the user can also manually select loads to shed from this interface.

5. Conclusions

This study addresses the challenges of highly time-varying equipment loads and complex power management in trailing suction hopper dredgers by proposing an innovative “prediction-driven dynamic power management mechanism.” The mechanism features two key innovations: (1) a dilated convolutional LSTM model with enhanced load sequence prediction, improving accuracy and effectively modeling complex temporal dependencies; (2) a hierarchical dynamic power management strategy that prioritizes adjustable load reduction to maintain grid balance and selectively implements non-essential load shedding via an improved grey wolf optimization algorithm when necessary, achieving dynamic generation-load matching. Experimental results demonstrate the mechanism’s effectiveness in preventing generator overloads and power failures while enhancing system stability and critical load reliability, providing a practical solution for intelligent energy management in vessels under complex operating conditions.
Future research may explore three optimization directions:
(1)
System integration: Coordinating the mechanism with shipboard integrated energy management systems (including energy storage and renewable energy) to develop multi-objective energy efficiency platforms.
(2)
Engineering validation: Conducting long-term onboard deployments to verify model robustness and adapting the solution for different dredger types (e.g., cutter suction dredgers).
(3)
Edge computing applications: Investigating lightweight model deployment on edge devices to reduce latency and enhance real-time performance.
This research provides important references for maritime intelligentization and green development, with potential to promote standardized applications in offshore engineering. The proposed framework establishes a technical foundation for next-generation smart dredging systems while demonstrating adaptability for broader marine energy management scenarios.

Author Contributions

Conceptualization, S.Y. and C.L.; methodology, Z.X.; software, S.S.; validation, Z.X., R.T. and Z.H.; formal analysis, Z.H.; investigation, Z.X.; writing—original draft preparation, R.T. and S.S.; writing—review and editing, S.Y., C.L. and Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Zhenjiang Science and Technology Program (grant number: JC2024021).

Data Availability Statement

All data relevant to this study are contained within the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, L. Energy efficiency optimization, integrated control, and energy-saving emission reduction of marine propulsion systems. Ship Mater. Mark. 2024, 32, 85–87. [Google Scholar]
  2. Cheng, T.; Lu, Q.; Kang, H.; Fan, Z.; Bai, S. Productivity Prediction and Analysis Method of Large Trailing Suction Hopper Dredger Based on Construction Big Data. Buildings 2022, 12, 1505. [Google Scholar] [CrossRef]
  3. Zhang, B.; Dong, M.H.; Zhao, C. Wave Load and Total Strength Analysis of Trailing Suction Hopper Dredger. China Water Transp. 2025, 3, 71–74. [Google Scholar] [CrossRef]
  4. Liu, H.S. From Technical Drawing Imitation to Independent Innovation: A Retrospective on the Development of Engineering Vessel Research at the 708th Research Institute. In Proceedings of the 2008 China Shipbuilding Industry Development Forum & 60th Anniversary of Shipbuilding of China; The Chinese Society of Naval Architects and Marine Engineers: Beijing, China, 2008; pp. 97–105. [Google Scholar]
  5. Pan, Y.; He, H. Dynamic power limitation of dredging power grids in trailing suction hopper dredgers. Shipbuild. Technol. 2020, 2, 13–16. [Google Scholar]
  6. Wang, X. A Power System and Control Method for Trailing Suction Hopper Dredgers: China. CN Patent 119305705A, 11 October 2024. [Google Scholar]
  7. Gao, D.; Pan, K.; Wang, T. Research on power load prediction model of hybrid ships. Control. Eng. 2019, 26, 362–367. [Google Scholar]
  8. Zhao, M.; Fan, Y.; Sun, H. Multivariate chaotic local prediction of electric load in electric propulsion ships. J. Syst. Simul. 2008, 11, 2797–2799+2805. [Google Scholar]
  9. Bocheński, D. Selection of main engines for hopper suction dredgers with the use of probability models. Pol. Marit. Res. 2018, 25, 70–76. [Google Scholar] [CrossRef]
  10. Braaksma, J.; Babuska, R.; Klaassens, J.B.; Keizer, C. A computationally efficient model for predicting overflow mixture density in a hopper dredger. Terra Aqua 2007, 106, 16. [Google Scholar]
  11. Vu, T.; Dhupia, J.; Ayu, A.; Kennedy, L.; Adnanes, A. Optimal power management for electric tugboats with unknown load demand. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1578–1583. [Google Scholar]
  12. Farahat, M.; Talaat, M. Short-term load forecasting using curve fitting prediction optimized by genetic algorithms. Int. J. Energy Eng. 2012, 2, 23–28. [Google Scholar] [CrossRef]
  13. Abumohsen, M.; Owda, A.; Owda, M. Electrical load forecasting using LSTM, GRU, and RNN algorithms. Energies 2023, 16, 2283. [Google Scholar] [CrossRef]
  14. Deng, Y.; Wang, X.; Liao, Y. ASA-Net: Adaptive sparse attention network for robust electric load forecasting. IEEE Internet Things J. 2023, 11, 4668–4678. [Google Scholar] [CrossRef]
  15. Liu, M.; Qin, H.; Cao, R.; Deng, S. Short-term load forecasting based on improved TCN and DenseNet. IEEE Access 2022, 10, 115945–115957. [Google Scholar] [CrossRef]
  16. Xin, Z. Stock Price Prediction Based on LSTM and Transformer Models. Ph.D. Thesis, Shandong University, Jinan, China, 2022. [Google Scholar]
  17. Guo, C.; Wang, X.; Wang, B.; Wang, J. Short-term power load forecasting based on multi-layer fusion neural network model. Comput. Mod. 2021, 10, 94–99+106. [Google Scholar]
  18. Giacomazzi, E.; Haag, F.; Hopf, K. Short-term electricity load forecasting using the temporal fusion transformer: Effect of grid hierarchies and data sources. In Proceedings of the 14th ACM International Conference on Future Energy Systems, Orlando, FL, USA, 20–23 June 2023; pp. 353–360. [Google Scholar]
  19. Chen, G.; Li, Z.; Ye, X. Embedded CPU load prediction simulation in cloud computing environments. Comput. Simul. 2023, 40, 492–495+523. [Google Scholar]
  20. Xie, T.; Deng, L.; Cao, Z.; Liang, C.; Li, C. Cloud platform host load prediction method based on LSTM-Z. Comput. Eng. Des. 2023, 44, 2561–2568. [Google Scholar]
  21. Shu, F.; Wang, Y.; Dai, X.; Liu, W. Load forecasting of electric propulsion ships based on BOA-LSSVM. Ship Sci. Technol. 2023, 45, 159–166. [Google Scholar]
  22. Rumelhart, D.; Hinton, G.; Williams, R. Learning Internal Representations by Error Propagation; ADA164453; Institute for Cognitive Science, University of California, San Diego (UCSD): San Diego, CA, USA, 1985. [Google Scholar]
  23. Jordan, M. Serial order: A parallel distributed processing approach. In Advances in Psychology; North-Holland: Amsterdam, The Netherlands, 1997; Volume 121, pp. 471–495. [Google Scholar]
  24. Graves, A.; Mohamed, A.; Hinton, G. Speech Recognition with Deep Recurrent Neural Networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6645–6649. [Google Scholar]
  25. He, X. Research on Image Inpainting Algorithm Based on Attention Mechanism and Wavelet Decomposition. Ph.D. Thesis, Chengdu University of Technology, Chengdu, China, 2020. [Google Scholar]
  26. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  27. Xue, X. Green Scheduling Optimization for Prefabricated Building Component Production in Company A. Ph.D. Thesis, Harbin University of Science and Technology, Harbin, China, 2024. [Google Scholar] [CrossRef]
  28. Zhou, Z. Research on Feature Selection Methods Based on Swarm Intelligence Optimization Algorithms. Ph.D. Thesis, Hebei GEO University, Shijiazhuang, China, 2024. [Google Scholar]
  29. Chen, J. Research on Stamping Forming Modeling Based on a Multi-Strategy Improved Grey Wolf Optimization Algorithm. Ph.D. Thesis, Zhejiang University of Science and Technology, Hangzhou, China, 2023. [Google Scholar]
Figure 1. Power management system framework.
Figure 2. RNN structure diagram.
Figure 3. LSTM neural network structure diagram.
Figure 4. Receptive field comparison chart.
Figure 5. Improved Dilated-LSTM model.
Figure 6. Power prediction flowchart.
Figure 7. Improved grey wolf algorithm flowchart.
Figure 8. Comparison of left mud pump power prediction of three models.
Figure 9. Comparison of high-pressure flush power prediction of three models.
Figure 10. Comparison of total generated power prediction of three models.
Figure 11. Convergence curves of different optimization algorithms for load shedding optimization.
Figure 12. Response time comparison of different optimization algorithms for load shedding optimization.
Figure 13. Load prediction interface diagram.
Figure 14. Hierarchical load shedding interface.
Table 1. Load Priority Levels.
Importance Level | TSHD Electrical Load Types
Extremely High | Mud pumps, high-pressure water pumps, propulsion, thrusters
High | Lighting, engine room auxiliaries, hydraulic pumps, fans
Medium | Other pumps (freshwater, seawater cooling, etc.)
Low | Kitchen equipment, recreational cabin loads
Table 2. Network parameter table.
Network Name | Parameter Name | Value
Dilated-LSTM | Time Window Size | 40
 | Dilation Factors | (1, 3, 7)
 | Number of Hidden Units | 16
 | Number of Iterations | 1000
 | Learning Rate | 0.005
 | Training Set Proportion | 85%
 | Normalization Method | Layer Normalization
Table 3. Power prediction results of left mud pump.
Indicator | RNN | LSTM | Dilated-LSTM
R² | 0.78 | 0.96 | 0.99
MAE (kW) | 36.25 | 12.44 | 8.96
MAPE (%) | 5.68 | 1.79 | 0.95
Table 4. High-pressure flushing power prediction results.
Indicator | RNN | LSTM | Dilated-LSTM
R² | 0.81 | 0.86 | 0.99
MAE (kW) | 32.34 | 20.14 | 7.66
MAPE (%) | 4.76 | 2.35 | 1.07
Table 5. Total generated power prediction results.
Indicator | RNN | LSTM | Dilated-LSTM
R² | 0.71 | 0.86 | 0.99
MAE (kW) | 25.97 | 25.53 | 6.83
MAPE (%) | 3.25 | 1.87 | 0.96
Table 6. The ablation study results of left mud pump.
Dilation Factors | (1) | (1, 3) | (1, 3, 7) | (1, 7, 15)
R² | 0.91 | 0.96 | 0.99 | 0.97
MAE (kW) | 18.6 | 12.82 | 6.83 | 10.44
MAPE (%) | 2.31 | 1.64 | 0.96 | 1.25
Table 7. Power prediction table.
Adjustable Equipment | Predicted Power (kW)
2 side thrusters | 600
2 high-pressure flushing pumps | 1400
2 propulsion motors | 2200
2 mud pumps | 4200
Other essential equipment | 200
Total demand | 8600
Table 8. Adjustable equipment power limit table.
Adjustable Equipment | Maximum Allowable Power (kW) | Minimum Allowable Power (kW)
No. 1/2 side thruster | 600 | 100
No. 1/2 high-pressure flushing pump | 1500 | 600
No. 1/2 propulsion motor | 3000 | 1000
No. 1/2 mud pump | 4000 | 2000
Table 9. Adjustable equipment power reduction table.
Adjustable Equipment | Original Power (kW) | Adjusted Power (kW) | Provided Power (kW)
2 side thrusters | 600 | 200 | 400
2 high-pressure flushing pumps | 1400 | 1200 | 200
2 propulsion motors | 2200 | 2000 | 200
2 mud pumps | 4200 | 4000 | 200
Table 10. Load shedding plan.
Load | Unloading Power (kW)
Air conditioning system | 280
Kitchen equipment and entertainment cabin loads | 200
Other types of pumps | 200
Lighting load | 120
2 side thrusters | 600
Total power | 1400
Table 11. Comparison of response times for each algorithm.
Algorithm | SA | GWO | IGWO | WOA | SSA
Computational time (s) | 0.47 | 0.03 | 0.23 | 0.04 | 0.1

