# Evaluation of Different Deep-Learning Models for the Prediction of a Ship’s Propulsion Power


## Abstract


## 1. Introduction

## 2. Data Collection and Preparation

#### 2.1. Data Acquisition

#### 2.2. Data Pre-Processing Pipeline

#### 2.2.1. Feature Selection

- First, a significance level was selected; in most cases, a 5% significance level suffices.
- Second, the model was fitted with all the features remaining after the intuitive feature exclusion.
- The feature with the highest p-value was identified.
- If this p-value exceeded the selected significance level, the feature was removed from the dataset and the model was refitted. The procedure was repeated until the highest p-value among the remaining features fell below the significance level, at which point the feature-selection process ended.
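The backward-elimination loop above can be sketched as follows. This is a minimal illustration (function and variable names are ours, not the paper's), fitting an ordinary-least-squares model and using a normal approximation for the two-sided p-values, which is adequate for datasets of this size:

```python
import math
import numpy as np

def backward_eliminate(X, y, names, alpha=0.05):
    """Backward feature elimination by OLS p-values (alpha = 5% by default)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    keep = list(range(X.shape[1]))          # indices of surviving features
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])  # intercept + features
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        dof = len(y) - A.shape[1]
        s2 = resid @ resid / dof                            # residual variance
        se = np.sqrt(s2 * np.diag(np.linalg.inv(A.T @ A)))  # coefficient std. errors
        t = beta / se
        # two-sided p-values via the normal approximation (fine for large n)
        p = np.array([math.erfc(abs(ti) / math.sqrt(2.0)) for ti in t])
        worst = int(np.argmax(p[1:]))                       # skip the intercept
        if p[1 + worst] <= alpha:
            break                                           # all features significant
        keep.pop(worst)                                     # drop the weakest feature
    return [names[j] for j in keep]
```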

#### 2.2.2. Outlier Removal

- Select a feature of the dataset deemed to be of high significance to the study; in our case, the main engine RPM was chosen as the primary feature.
- Split the rest of the dataset into intervals based on the primary feature’s values (here, intervals of 10 rpm) and assign each sample to its respective group.
- For any other given feature, calculate the mean m and the standard deviation s within each cluster of data.
- Multiply the standard deviation by an arbitrarily chosen factor k to set an outlier threshold; in our case, k = 3 was selected, which corresponds to a loose outlier-detection criterion.
- Evaluate the inequality of Equation (1) for each data point to determine whether the given sample is an outlier.
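Assuming Equation (1) is the familiar condition |x − m| > k·s, the grouping-and-thresholding steps above can be sketched as (names are illustrative):

```python
import numpy as np

def flag_outliers(rpm, feature, bin_width=10.0, k=3.0):
    """Flag samples whose feature value lies more than k standard deviations
    from the mean of their rpm bin (assumed form of Eq. (1): |x - m| > k*s)."""
    rpm, x = np.asarray(rpm, float), np.asarray(feature, float)
    bins = np.floor(rpm / bin_width).astype(int)   # 10-rpm intervals
    outlier = np.zeros(len(x), dtype=bool)
    for b in np.unique(bins):
        idx = bins == b
        m, s = x[idx].mean(), x[idx].std()         # per-bin mean and std
        if s > 0:
            outlier[idx] = np.abs(x[idx] - m) > k * s
    return outlier
```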

#### 2.2.3. Data Smoothing
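Figure 6 compares 5-, 10-, and 15-min averaging windows; assuming the smoothing is a plain moving average over such a window, a minimal sketch is:

```python
import numpy as np

def moving_average(signal, window):
    """Trailing moving average over `window` samples (a sketch; the exact
    filter is assumed to be a simple mean over 5/10/15-min windows)."""
    x = np.asarray(signal, float)
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")
```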

## 3. Feed-Forward Neural Networks

#### 3.1. Model Description

#### Components of the ANNs

- Use applicable rules of thumb from deep-learning practice to initialize the process with rough values for the hyperparameters and insert these parameters into the neural network, since tuning every hyperparameter is overwhelming not only for the computer, but also for the human interpreting the results. Hyperparameters regarded as more significant than those fixed in this step are reviewed in the next step.
- After experimentation, a grid search was deemed computationally affordable and was hence preferred over a randomized search for finding the optimal configuration. The hyperparameters examined through the grid search are model-structure related: the number of layers, the number of neurons per layer, and the activation function. Performance is assessed on the training set and, more importantly, on the test set, and the network’s structure and activation function are determined.
- Having determined the structure of the model and the activation function, the learning rate is reviewed; it is reduced until the loss function ceases to fluctuate during training.
- Once convergence of the loss function is observed, the batch size is tuned as a final hyperparameter that can further improve overall performance.
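The grid-search step can be sketched as an exhaustive loop over the structural hyperparameters examined later (10–22 layers, 50–400 nodes per layer, ReLU or linear activation); `evaluate` is a hypothetical callback that trains a network with the given configuration and returns its mean percentage error on the test set:

```python
from itertools import product

def grid_search(evaluate, layers=(10, 12, 15, 18, 20, 22),
                widths=(50, 100, 150, 200, 300, 400),
                activations=("relu", "linear")):
    """Exhaustive search over the structural hyperparameter grid.
    `evaluate(n_layers, width, activation)` is assumed to return the
    mean percentage error on the test set; lower is better."""
    best_cfg, best_err = None, float("inf")
    for n_layers, width, act in product(layers, widths, activations):
        err = evaluate(n_layers, width, act)
        if err < best_err:
            best_cfg, best_err = (n_layers, width, act), err
    return best_cfg, best_err
```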

#### 3.2. Model Application

…10^{−3}, leading us to claim that the optimal learning rate for this particular model is situated between 10^{−3} and 10^{−5}. The errors and standard deviations of the errors obtained with different learning rates are illustrated in Figure 11. A value of 10^{−4} seemed to be ideal for our model’s learning rate. Once we determined the values of the learning parameters, the batch size and its effect on the test-set error were scrutinized (Figure 12).

## 4. Recurrent Neural Networks

#### 4.1. Model Description

#### Components of LSTM RNNs

- Forget Gate: conditionally decides what information to discard from the block.
- Input Gate: conditionally decides which values from the input are used to update the memory state.
- Output Gate: conditionally decides what to output based on the input and the memory of the block.
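With σ the logistic sigmoid, ⊙ the element-wise product, x_t the current input, h_{t−1} the previous hidden state, and c_t the cell (memory) state, the three gates of the standard LSTM cell read:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\!\left(W_i x_t + U_i h_{t-1} + b_i\right) && \text{(input gate)}\\
o_t &= \sigma\!\left(W_o x_t + U_o h_{t-1} + b_o\right) && \text{(output gate)}\\
\tilde{c}_t &= \tanh\!\left(W_c x_t + U_c h_{t-1} + b_c\right) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad h_t = o_t \odot \tanh\!\left(c_t\right)
\end{aligned}
```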

- The number of epochs: Increasing the number of epochs improves performance, since more iterations give the model the chance to learn better; however, too many eventually result in overfitting.
- The batch size: This defines the frequency of weight updates. Increasing the batch size lowers the computational cost and reduces the variance of the gradient estimate, although overly large batches can harm generalization.
- The number of hidden layers and the number of units in each layer: Both enhance the model’s accuracy, but eventually lead to memorizing extravagantly intricate patterns in training, resulting in overfitting.
- Learning rate of the optimizer: This is a critical parameter that dictates the learning pace of the algorithm, but it entails the risk of utter learning failure if chosen too large in an attempt to expedite the process.
- Dropout rate: This is responsible for mitigating overfitting; nevertheless, underfitting looms if it is set higher than the model requires.
- The number of samples inputted: LSTM models receive n samples at a time to generate one output value. A large number of input samples provides the model with a more holistic view; however, it carries a risk of overfitting.
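The last point can be made concrete with a windowing step: the series is reshaped so that the model sees n consecutive samples and predicts the next value (n = 15 in the final configuration). A minimal sketch (names are ours):

```python
import numpy as np

def make_windows(series, n=15):
    """Shape a signal into (samples, n, 1) windows for an LSTM:
    each window of n consecutive points targets the next point."""
    x = np.asarray(series, float)
    X = np.stack([x[i:i + n] for i in range(len(x) - n)])[..., None]
    y = x[n:]                      # the value following each window
    return X, y
```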

- Use applicable rules of thumb known in the deep-learning community to initialize the process with rough values for the hyperparameters.
- Implement the grid or the randomized search repeatedly until the hyperparameters converge to a specific configuration.
- Insert these parameters into the neural network.
- Assess performance on the training and, more importantly, on the evaluation data.
- Manually carry out further refinements to the hyperparameters if deemed necessary.

#### 4.2. Model Application

- The large inertia characterizing a tanker ship, resulting in slow rates of change among most features, and
- A large amount of data for both training and testing our algorithm.

…10^{−4} and 256, respectively. However, those two values were reviewed for each model to determine the learning-rate and batch-size values that would optimize our models’ performance. The learning rate was varied from 10^{−4} to 10^{−3}, and despite the mild volatility in this range of values, convergence can be observed. Observing Figure 16, a learning rate of 5 × 10^{−4} was selected.

## 5. Application of the Ship Power Prediction

#### 5.1. Power Prediction Using ANN

#### 5.2. Power Prediction Using RNN

## 6. Model Comparison

Ideally, P_{meas} = P_{pr} for every sample. To quantify the goodness of fit, the coefficient of determination, R^{2}, was employed.

- (i) The mean of the observed data: $$\overline{y}=\frac{1}{n}\sum_{i=1}^{n}{y}_{i}$$
- (ii) The sum of the squared differences of every sample from the mean (proportional to the variance): $$\mathrm{SS}=\sum_{i=1}^{n}\left({y}_{i}-\overline{y}\right)^{2}$$
- (iii) The sum of squares of residuals: $${\mathrm{SS}}_{\mathrm{res}}=\sum_{i=1}^{n}\left({y}_{i}-\widehat{{y}_{i}}\right)^{2}$$
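These three quantities combine into the standard coefficient of determination, R^{2} = 1 − SS_{res}/SS; a minimal implementation:

```python
import numpy as np

def r2_score(y, y_hat):
    """Coefficient of determination from the three quantities above."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    y_bar = y.mean()                    # (i) mean of the observed data
    ss_tot = ((y - y_bar) ** 2).sum()   # (ii) total sum of squares
    ss_res = ((y - y_hat) ** 2).sum()   # (iii) sum of squared residuals
    return 1.0 - ss_res / ss_tot
```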

…R^{2} metric scores from both network types across all segments. The deviation of each line’s slope from 1 is also depicted. We defined the slope deviation as:
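Assuming the slope deviation is |slope − 1| of the least-squares fit between predicted and measured power, a minimal sketch (names are ours):

```python
import numpy as np

def slope_deviation(y_meas, y_pred):
    """Least-squares slope of the predicted-vs-measured fit line and its
    deviation from the ideal slope of 1 (assumed to be |slope - 1|)."""
    slope, _ = np.polyfit(np.asarray(y_meas, float),
                          np.asarray(y_pred, float), 1)
    return abs(slope - 1.0)
```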

## 7. Discussion of Use-Case Applications

## 8. Conclusions

- (1) LSTM models appeared to be suitable for propulsion power prediction, providing both great sensitivity and precision.
- (2) One of the main reasons for the accuracy decline of the FFNNs in the out-of-sample predictions was the stochasticity of the signal, namely its sudden and steep changes. Applying additional filters could therefore smooth the signal, enhancing performance and accuracy.
- (3) The method was tested through real-case applications onboard, with the aim of determining the performance degradation of the vessel. It was shown that the suggested methodology incorporates diagnostic and prognostic features in a flexible framework, enabling technical and economic monitoring of past, current, and forecasted states.

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 3.** **Left:** Histogram of the speed over ground for rpm values in the range (50, 60). **Right:** Histogram of the power generated by the main engine for rpm values in the same range.

**Figure 4.** Outlier removal with respect to main engine rpm; the left diagram corresponds to the power feature, and the right one to the speed over ground feature. Samples to be removed appear in orange, and the remainder of the data points appear in blue.

**Figure 6.** Real (blue line) vs. smoothed (red line) signal for a 5-min (**top**), 10-min (**middle**), and 15-min (**bottom**) averaging window.

**Figure 7.** **Left:** Comparison of the models concerning the mean percentage error each one achieved on the test set. **Right:** Comparison of the models concerning the time required for the hyperparameter-tuning procedure.

**Figure 8.** **Left:** Mean percentage error concerning the number of nodes per layer for different numbers of hidden layers when applying the linear activation function. **Right:** Standard deviation of the percentage error concerning the number of nodes per layer for different numbers of hidden layers when applying the linear activation function.

**Figure 9.** **Left:** Mean percentage error concerning the number of nodes per layer for different numbers of hidden layers when applying the ReLU activation function. **Right:** Standard deviation of the percentage error concerning the number of nodes per layer for different numbers of hidden layers when applying the ReLU activation function.

**Figure 11.** **Left:** Mean percentage error concerning the learning rate of the optimizer. **Right:** Standard deviation of the percentage error concerning the learning rate of the optimizer.

**Figure 12.** **Left:** Mean percentage error concerning the batch size. **Right:** Standard deviation of the percentage error concerning the batch size.

**Figure 13.** **Left:** Minimum error on the validation set with respect to the number of units in the LSTM cell for various windows of inputted data points when applying the ReLU activation function. **Right:** Average error on the validation set among the 10 best-performing epochs with respect to the number of units in the LSTM cell for various windows of inputted data points when applying the ReLU activation function.

**Figure 14.** **Left:** Minimum error on the validation set with respect to the number of units in the LSTM cell for various windows of inputted data points when applying the sigmoid activation function. **Right:** Average error on the validation set among the 10 best-performing epochs with respect to the number of units in the LSTM cell for various windows of inputted data points when applying the sigmoid activation function.

**Figure 16.** **Left:** Minimum error on the validation set concerning the learning rate. **Right:** Average error among the 10 epochs with the lowest error concerning the learning rate.

**Figure 17.** **Left:** Minimum error on the validation set concerning the batch size. **Right:** Average error among the 10 epochs with the lowest error concerning the batch size.

**Figure 20.** **Left:** Propagation of the mean percentage error across the out-of-sample subset. **Right:** Propagation of the standard deviation of the error across the out-of-sample subset.

**Figure 21.** Measured values surrounded by the respective 95% confidence interval that stemmed from the estimated values.

**Figure 24.** **Left:** Propagation of the mean percentage error across the out-of-sample subset. **Right:** Propagation of the standard deviation of the error across the out-of-sample subset.

**Figure 25.** Measured values of the power generated by the main engine surrounded by the respective 95% confidence interval that stemmed from the estimated values of the RNN models.

**Figure 26.** **Left:** Scatter plot and optimal-fit line of the first segment of the FFNN model. **Right:** Scatter plot and optimal-fit line of the first segment of the RNN model.

**Figure 27.** **Left:** Optimal-fit lines for all segments of the FFNN model. **Right:** Optimal-fit lines for all segments of the RNN model.

**Figure 28.** **Left:** R^{2} score comparison between the types of networks. **Right:** Slope deviation comparison between the types of networks.

**Figure 29.** Detecting the ship-degradation anomaly through the power-increase deviation indicator.

Parameter | Value
---|---
Length | 264.00 m
Breadth (molded) | 50.00 m
Depth (molded) | 23.10 m
Engine’s MCR | 18,666 kW @ 91 RPM

Signal Source | Parameters
---|---
Navigational parameters | GPS, speed log, gyro compass, rudder angle, echo sounder, anemometer, inclinometer (pitching–rolling), drafts, weather data
Main engine (ME) | Torquemeter (shaft RPM, torque, power), ME fuel rack position %, ME FO pressure, ME scavenge air pressure, ME T/C RPM
Fuel-oil (FO) monitoring | ME FO consumption, diesel generator (DG) FO consumption
Alarm monitoring system | Indicative: DGs’ lube oil (LO) inlet pressure, cylinders’ exhaust gas outlet temperature, turbocharger (TC) LO pressure, TC inlet gas temperature

Parameter | Value
---|---
Dropout rate | 0
Kernel weight initializer | Uniform
Number of epochs | 250

Parameter | Value
---|---
Number of layers | (10, 12, 15, 18, 20, 22)
Network’s width | (50, 100, 150, 200, 300, 400)
Activation function | (ReLU, Linear)

Parameter | Value
---|---
Learning rate | 2.5 × 10^{−4}
Number of layers | 20
Maximum number of nodes per layer | 400
Batch size | 128
Epochs | 200
Activation function | ReLU
Kernel initializer | Uniform
Dropout rate | 0

Parameter | Value
---|---
Epochs | 200
Dropout rate | 0
Number of features inserted | 1
Number of LSTM cells | 1

Parameter | Value
---|---
Number of LSTM units in LSTM cell | (100, 200, 500, 1000)
Window of inserted data into the cell | (3, 5, 7, 10, 15, 20)
Activation function | (ReLU, Sigmoid)

Parameter | Value
---|---
Learning rate | 5 × 10^{−4}
Number of inputted samples | 15
Number of units in LSTM cell | 1000
Batch size | 128
Epochs | 200
Activation function | ReLU
Features inserted | 1
Dropout rate | 0

Parameter | FFNN | RNN
---|---|---
Power | 7 | 0.58
Fuel Oil Consumption * | 5.8 | 0.58

**Table 10.** Comparison of the training duration and the time * required for each network type to complete.

Parameter | FFNN | RNN
---|---|---
Training duration (min) | 16.67 | 57
Time required for predictions (min) | 0.4 | 36

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Theodoropoulos, P.; Spandonidis, C.C.; Themelis, N.; Giordamlis, C.; Fassois, S. Evaluation of Different Deep-Learning Models for the Prediction of a Ship’s Propulsion Power. *J. Mar. Sci. Eng.* **2021**, *9*, 116.
https://doi.org/10.3390/jmse9020116
