Article

Prediction of Ship Estimated Time of Arrival Based on BO-CNN-LSTM Model

1 Navigation College, Jimei University, Xiamen 361021, China
2 Division of Business and Hospitality Management, College of Professional and Continuing Education, The Hong Kong Polytechnic University, Hong Kong, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2026, 14(8), 694; https://doi.org/10.3390/jmse14080694
Submission received: 22 February 2026 / Revised: 6 April 2026 / Accepted: 7 April 2026 / Published: 8 April 2026
(This article belongs to the Section Ocean Engineering)

Abstract

Accurate prediction of a ship’s Estimated Time of Arrival (ETA) is of great significance for port scheduling, logistics management, and navigation safety. Traditional ETA prediction approaches often rely on manual experience for parameter tuning, which tends to be inefficient and susceptible to subjective factors. To address this issue and improve prediction accuracy, this study proposes a hybrid modeling framework, integrating Bayesian Optimization (BO), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks. In this approach, Automatic Identification System (AIS) data is leveraged to predict the total voyage duration before departure, thereby deriving the vessel’s ETA. The model, referred to as BO-CNN-LSTM, utilizes BO for automatic hyperparameter tuning, employs a CNN for extracting local features, and applies an LSTM network to capture temporal dependencies. The model is developed using a dataset of 32,972 distinct voyage records, among which 23,947 are retained as valid samples after data cleaning. Pearson correlation analysis is conducted to select key input variables, including navigation speed, ship type, sailing distance, and deadweight tonnage. Additionally, sailing distance is processed using the Ramer–Douglas–Peucker algorithm. Experimental evaluation indicates that the BO-CNN-LSTM model achieves a coefficient of determination of 0.987, along with a mean absolute error and root mean square error of 6.078 and 8.730, respectively. These results significantly outperform comparison models such as CNN, LSTM, CNN-LSTM, random forest, AdaBoost, and Elman neural networks. Overall, this study validates the effectiveness and superiority of the proposed BO-CNN-LSTM model in ship ETA prediction, providing an efficient and effective prediction solution for intelligent maritime transportation systems.

1. Introduction

In recent decades, the international shipping industry has experienced rapid growth and expansion. A 2019 report by the United Nations Conference on Trade and Development (UNCTAD) stated that maritime transport serves as the backbone of global trade and international supply chains, accounting for approximately 90% of worldwide goods movement [1,2,3,4]. Within global supply chains, multi-modal transportation has gained increasing importance. Due to their advantages of large capacity and relatively low transportation costs, ships play an essential role in facilitating international multi-modal logistics networks. In real-world maritime cargo operations, certain types of vessels, such as container ships and passenger ships (i.e., ferries), often follow fixed scheduled itineraries and operate along predefined routes between ports [5]. However, for these scheduled voyages, the approximate Estimated Time of Arrival (ETA) provided in daily noon reports may differ significantly from the actual time of arrival [6]. This discrepancy becomes more pronounced with longer planned voyages. Consequently, the limited accuracy in ETA hinders efficient coordination between vessels and port operations, ultimately reducing overall supply chain efficiency [7]. For non-scheduled (tramp) shipping operations, accurately predicting a vessel’s arrival time is also critical as it can improve transport efficiency, and reduce associated waiting times and economic costs.
To achieve more accurate prediction for vessel ETAs, this study utilizes Automatic Identification System (AIS) data. Key features are carefully selected from the dataset and employed as input for a novel Bayesian optimization (BO)–Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM) model, referred to as BO-CNN-LSTM. Moreover, dynamic temporal and local spatial features are extracted from the selected data. The model predicts the total voyage duration prior to the start of the voyage. By combining the known departure time with the predicted total voyage duration, accurate ETA predictions are achieved.
This study presents three major breakthroughs. First, we develop a vessel arrival time prediction approach based on the BO-CNN-LSTM model. Within the current research context, this method effectively integrates spatiotemporal feature extraction with automated hyperparameter optimization, thereby addressing the limitations of traditional prediction techniques and significantly improving ETA prediction accuracy. Second, extensive empirical experiments are conducted using large-scale AIS datasets covering a wide range of vessel types. The experimental results demonstrate that the model maintains stable prediction accuracy under the given conditions, even across varying vessel types and voyage distances. Third, a comparative analysis is performed against other mainstream prediction models under identical experimental conditions. The results not only confirm the superiority and practical value of the proposed model for the ETA prediction task but also provide reliable technical insights and methodological guidance for intelligent shipping systems.
To conclude, this paper is structured into six sections. Section 2 reviews the literature related to vessel arrival time prediction, whereas Section 3 details the processing of AIS data and the models applied in this study. Section 4 presents experimental setup and comparative validation. Moreover, Section 5 analyses and discusses the experimental results. Finally, Section 6 concludes this work, summarizes the findings, and proposes some directions for future work.

2. Literature Review

2.1. Research on Utilizing AIS Data in Maritime Transportation

AIS data has been extensively utilized across various domains, including maritime safety management, trajectory analysis, emission assessment, and waterway planning. These applications demonstrate the multi-dimensional value of AIS across diverse scenarios.
In autonomous ship berthing and trajectory planning, Sun et al. [8] proposed an AIS-based data mining method based on density peak clustering to identify critical berthing locations and vessel velocity characteristics. This method is further integrated with a decelerated line-of-sight guidance mechanism and an optimal control allocation method, achieving trajectory planning and control optimization during automated berthing operations.
For vessel trajectory identification and anomalous behavior monitoring, Gu et al. [9] proposed a trajectory identification method that examines trajectory variations near the boundaries of camera arrays. Missing trajectories are supplemented using AIS data to ensure continuity. The fused video surveillance with AIS data approach enables real-time vessel localization within navigable waterways and provides timely deviation alerts, enhancing both real-time capability and accuracy of waterway monitoring.
Regarding ship maneuvering behavior and decision support, Tian et al. [10] employed historical AIS data to identify vessel maneuvers by identifying variations in motion patterns. On this basis, they further designed logical rules to distinguish motion changes caused by external environmental factors. This approach improves the understanding of vessel decision-making processes and contributes to improved maritime traffic safety.
In terms of waterway safety and navigable depth reliability assessment, Li et al. [11] applied AIS data using Singapore Port as a case study to calculate and evaluate the reliability of navigable depths. The findings provided data support and decision-making basis for the rational utilization of port depth resources and the enhancement of navigation safety management.
For environmental emission assessment and policy recommendations, Qi et al. [12] employed a bottom-up methodology based on AIS trajectory data spanning from 2016 to 2022 to conduct a detailed estimation of black carbon emissions generated by Arctic shipping activities. The study incorporated different sailing states, vessel types, fuel categories, and spatial distribution characteristics. Furthermore, the study projected emission trends for 40 vessel categories for the years 2030 and 2050, thereby offering scientific evidence to support environmental governance and policy formulation in Arctic shipping operations.
In summary, AIS data plays a crucial supporting role across multiple maritime-related fields, such as vessel behavior analysis, navigation safety monitoring, and environmental impact assessment. Its interdisciplinary applications continue to expand, providing both a solid research foundation and practical pathways for the multi-dimensional advancement of intelligent shipping systems.

2.2. Research Status of Predictive Models in Other Fields

Prediction models are not limited to maritime applications; instead, they demonstrate broad applicability and strong adaptability across various interdisciplinary fields, including land transportation, natural disaster early warning, and air transport. Research in these areas provides valuable insights for handling complex spatiotemporal prediction problems by introducing innovative model architectures and optimization strategies. These methodologies hold significant implications for vessel ETA prediction.
In the field of land-freight transit-time prediction, Li et al. [13] proposed a temporal–attribute–spatial triple-space coordinating framework to improve the prediction accuracy of truck ETAs. This framework integrated three core modules: a temporal learning module designed to capture sequential dependencies, an attribute extraction module that transforms continuous features into structured embeddings, and a spatial fusion module to integrate spatiotemporal information. Using GPS trajectory data as the experimental basis, this study validated the effectiveness of multi-module collaboration in enhancing prediction stability within complex land transportation scenarios.
Regarding natural disaster early-warning systems, Song et al. [14] utilized artificial neural networks to support tsunami warning research. Their approach employed the Kriging method to analyze the spatial distribution patterns of maximum tsunami wave heights in harbor areas. Model performance was evaluated using Root Mean Square Error (RMSE) and the coefficient of determination (R2). The proposed neural network was able to simultaneously predict the maximum tsunami wave height and arrival time, providing crucial temporal and intensity information for coastal disaster prevention and decision-making processes.
In the field of earthquake monitoring, Moghadam et al. [15] developed a machine learning approach based on histogram gradient boosting to rapidly estimate earthquake location and magnitude using seismic wave arrival time patterns. Characterized by its high accuracy and computational efficiency, this method was validated using real-world data from the 2012 aftershock sequence in the Ahar-Varzeghan region of northwestern Iran, showcasing the practical applicability of machine learning techniques in real-time geophysical monitoring.
In air transport management, Cao et al. [16] investigated the impact of flight arrival time uncertainty on airport operations. They developed a deep learning framework combining CNN, LSTM, and attention mechanisms to achieve high-precision prediction of actual flight arrival times. To further optimize resource allocation, the proposed study constructed a dual-objective gate assignment model and employed an epsilon-constraint-based branch-and-price algorithm to obtain non-dominated Pareto-optimal solutions. This approach effectively reduced potential conflicts caused by arrival time fluctuations without relying on traditional robust optimization methods, offering a new solution approach for dynamic resource scheduling problems.
In summary, applications of predictive modeling, ranging from truck ETA and tsunami warnings to earthquake localization and flight arrival prediction, demonstrate a continuous expansion of their functional scope. As prediction models continue to expand, their performance is enhanced by integrating technologies such as temporal analysis, spatial modeling, feature engineering, and multi-objective optimization. These interdisciplinary research efforts further reveal that designing hybrid model architectures, combined with domain knowledge and introducing intelligent optimization algorithms for parameter or decision tuning, is an effective way to address complex spatiotemporal prediction challenges. These insights provide valuable methodological guidance and technical references for further development of vessel ETA prediction models in the maritime field. In particular, they emphasize the importance of multi-source data fusion, dynamic environment adaptation, and coordinated resource optimization.

2.3. Current Research Status in Predicting Vessel ETA

Over time, a wide range of models has been developed to predict vessel arrival times. Early research focused on statistical and rule-based methods. For instance, Du et al. [17] used chi-square tests to analyze the arrival patterns of ships at container terminals, providing a statistical foundation for understanding port traffic behavior. Moreover, Jung et al. [18] proposed a framework combining Bayesian inference along with the Metropolis–Hastings algorithm to estimate vessel destination and arrival time, marking an important step toward probabilistic modeling. In addition, Alessandrini et al. [19] directly utilized historical AIS trajectory data and employed Gaussian process methods for ETA estimation, highlighting the importance of data-driven techniques. Furthermore, Kim et al. [20] focused on integrating historical and real-time information to enable early detection of vessel delays. Meanwhile, Pani et al. [21] adopted a qualitative perspective to analyze the underlying causes of early and late vessel arrivals at container terminals. With advances in computational techniques, research focus has shifted toward more complex models. For example, Nguyen et al. [22] innovatively mapped vessel trajectories onto spatial grids and employed a sequence-to-sequence architecture to predict both destination and ETA, demonstrating the effectiveness of deep learning for spatiotemporal sequence problems. In parallel, Ref. [23] focused on real-time prediction, developing a system for destination and ETA prediction in maritime traffic, whereas Park et al. [24] built an ETA prediction system based on path-finding algorithms. The foundational explanation of the Metropolis–Hastings algorithm, introduced by Chib et al. [25], provided theoretical support for related Bayesian computational methods. Research into deep learning continues with El Mekkaoui et al. [26], who directly used AIS data for vessel ETA prediction. Added to that, Hao et al.
[27] applied Recurrent Neural Networks (RNNs) for non-parametric modeling of ship maneuvering motions, showcasing the capability of deep learning in describing vessel dynamics. Finally, Bourzak et al. [28] systematically reviewed the application of deep learning in ETA prediction and conducted a case study using the St. Lawrence River, indicating that research in this field is gradually becoming more systematic and practically oriented.
Recent studies have further advanced this field. For instance, Evmides et al. [29] proposed a machine learning framework for ETA prediction, validating its effectiveness through comparative analysis across multiple machine learning and deep learning models. Notably, their study employed a route-based train and test split strategy to reduce bias and overfitting. Moreover, El Mekkaoui et al. [30] evaluated the predictive capability of deep learning models for ETA from a bulk port perspective, integrating AIS data with other information sources. The developed deep-learning-based ETA prediction approach closely aligns with the BO-CNN-LSTM framework proposed in this study. Furthermore, Baghizadeh et al. [31] utilized publicly available AIS data to analyze vessel operations in offshore wind farm maintenance, focusing on reconstructing maintenance vessel trajectories and identifying inefficiencies in routing and scheduling. Their work demonstrated that AIS-based data analysis can reveal operational issues, including suboptimal maintenance sequencing, redundant travel, increased fuel consumption, and extended service duration, thereby demonstrating the broad applicability of AIS-driven predictive analytics. Moreover, Saber et al. [32] developed a hybrid machine learning approach for high-precision ETA prediction within port environments. The robustness of this model was explicitly verified through cross-validation experiments. Their study provides important insights for port-based vessel ETA prediction research, highlighting that evaluation design plays a critical role in verifying model reliability.
Although accurate vessel ETA prediction is essential for efficient port operations and coordinated multimodal transport systems, existing approaches still face numerous challenges. Traditional approaches often rely on statistical models or rule-based empirical estimations, which often struggle to fully exploit the complex spatiotemporal features within vessel voyage data, thereby restricting prediction accuracy. Furthermore, vessel navigation is influenced by various dynamic factors, such as speed variations, route deviations, and meteorological and sea conditions, making ETA prediction a highly complex and nonlinear problem in high-dimensional space.
To address the aforementioned issues, this paper proposes a hybrid prediction model integrating BO, CNN, and LSTM networks, termed BO-CNN-LSTM, aiming to achieve accurate end-to-end ETA prediction. This model extracts local spatial features from AIS data using CNN, captures temporal dynamics through LSTM, and further introduces the BO method to automatically tune the model’s hyperparameters. This approach enhances model construction efficiency while ensuring prediction accuracy.

3. Data and Methodology

3.1. Data Sources and Cleaning

The dataset for this study primarily consisted of AIS navigation data collected from vessels, with particular emphasis on tramp general cargo vessels, liquid bulk carriers, and dry bulk carriers. In addition to this core dataset, supplementary data from passenger ships and special-purpose vessels were also incorporated to enhance data diversity. The initial dataset comprised 32,972 distinct voyage records. The spatial distribution of selected voyages is partially illustrated in Figure 1.
To improve data quality and accuracy and to mitigate potential prediction errors arising from data inconsistencies, a cleaning procedure was applied to the AIS data. This procedure targeted voyages containing missing or anomalous data points. The cleaning procedure consisted of the following steps:
  • For each identified voyage, the corresponding AIS messages were chronologically sorted based on their timestamps;
  • The spatial coordinates (longitude and latitude) of the first and last AIS messages in each voyage were examined. If the final position was not near the intended destination port but instead located at sea, the voyage was considered incomplete and subsequently removed from the dataset;
  • Time intervals between consecutive AIS messages within a voyage were analyzed. Any interval exceeding 600 s (10 min) was considered abnormal; depending on the context, either the affected data points or the entire voyage was removed.
Following the data cleaning process, a refined dataset of 23,947 complete and valid voyage records was obtained for subsequent model training and analysis.
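The cleaning steps above can be sketched in code. In this minimal illustration, the record field names (`ts`, `lat`, `lon`) and the 0.5-degree "near the port" tolerance are assumptions for demonstration; only the 600 s gap threshold comes from the text, and for simplicity a voyage with any abnormal gap is dropped entirely rather than partially repaired:

```python
from datetime import timedelta

MAX_GAP = timedelta(seconds=600)   # 10-minute threshold from the cleaning rules
PORT_RADIUS_DEG = 0.5              # assumed "near the destination port" tolerance

def clean_voyage(messages, dest_lat, dest_lon):
    """Return the voyage sorted by timestamp, or None if it is invalid."""
    if not messages:
        return None
    msgs = sorted(messages, key=lambda m: m['ts'])            # step 1: chronological sort
    last = msgs[-1]
    near_port = (abs(last['lat'] - dest_lat) < PORT_RADIUS_DEG
                 and abs(last['lon'] - dest_lon) < PORT_RADIUS_DEG)
    if not near_port:                                          # step 2: incomplete voyage
        return None
    for a, b in zip(msgs, msgs[1:]):                           # step 3: abnormal time gaps
        if b['ts'] - a['ts'] > MAX_GAP:
            return None
    return msgs
```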

3.2. Data Preprocessing

The ETA of a vessel is influenced by a wide range of factors. These include not only intrinsic parameters derived from AIS data, such as Maritime Mobile Service Identity (MMSI), timestamp, ETA, destination ID, destination port, navigational status, longitude, latitude, region ID, speed, rate of turn, course over ground, true heading, ship dimensions, draft, and vessel type, but also external environmental conditions like wind speed and wave direction. Table 1 summarizes input feature selections from recent studies focusing on vessel arrival time prediction.
Feature engineering refers to the process of leveraging domain knowledge to construct effective feature representations that capture the underlying characteristics of a problem, thus enhancing model predictive capability [40]. Based on a review of feature selection in other studies related to vessel arrival time prediction, these selected representations primarily encompass two aspects: vessel AIS data and meteorological factors (such as wind speed and wave height) during the voyage. However, since sailing speed and draft can directly reflect meteorological conditions, and the addition of meteorological parameters does not significantly impact model performance [35], this study excludes such factors from the input features. Instead, vessel AIS data were utilized for model development.
Among the various data fields within AIS, not all exhibit strong relevance to vessel arrival time, or more specifically, the voyage duration given a fixed departure time. Therefore, a Pearson correlation coefficient matrix was constructed to evaluate the relationship between individual input variables and voyage duration. The results are presented in Figure 2, where the Pearson correlation coefficient (r) indicates the linear relationship between two variables; an r value of 1 denotes a perfect positive correlation, −1 denotes a perfect negative correlation, and 0 indicates no linear relationship.
Figure 2 reveals that the absolute values of the Pearson correlation coefficients between voyage duration and seven input variables, namely, deadweight tonnage, sailing speed, draft, vessel type, sailing distance, port of origin, and port of destination, are greater than 0.1, whereas the coefficients for all other variables are less than 0.1. The ranking of these coefficients, when considering the magnitude, is as follows: sailing distance > sailing speed > vessel type > deadweight tonnage > port of origin > port of destination > draft. Based on this analysis, four variables were selected as input features for prediction: sailing speed, vessel type, sailing distance, and deadweight tonnage.
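The |r| > 0.1 screening described above can be illustrated with a short sketch; the feature names and synthetic data below are illustrative only, not the study's dataset:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def select_features(features, target, threshold=0.1):
    """Keep features whose |r| against the target exceeds the threshold."""
    r = {name: pearson_r(col, target) for name, col in features.items()}
    kept = {name for name, v in r.items() if abs(v) > threshold}
    return kept, r
```

With synthetic voyages where duration is driven mainly by distance, a pure-noise column falls below the 0.1 cutoff while distance survives.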
The route distance was calculated using the Ramer–Douglas–Peucker (RDP) algorithm. In this work, the vessel AIS data was divided into a training set and a test set using a ratio of 9:1. The training set was used for the model training, while the test set was employed to evaluate the predictive performance and generalization ability of the model. This design ensures that data in the training set and the test set are completely separated to avoid leakage. During model training, the vessel speed was represented by the average speed over the entire voyage, and the Huber regression model was utilized to analyze and process the vessel speed in the training set. Through this approach, the proxy speeds corresponding to different vessel types were generated, as shown in Table 2. In the testing phase, the speeds of different vessel types were tested using their respective proxy speeds.
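The per-type proxy speeds can be sketched with an intercept-only Huber fit: outlying voyage speeds are down-weighted instead of skewing the average. This is a hedged illustration solved by iteratively reweighted least squares (the delta and iteration settings below are assumptions, not values from the study):

```python
import numpy as np

def huber_proxy_speed(speeds, delta=1.35, n_iter=50):
    """Robust average speed via an intercept-only Huber fit (IRLS)."""
    s = np.asarray(speeds, float)
    mu = float(np.median(s))                       # robust starting point
    for _ in range(n_iter):
        r = s - mu
        scale = np.median(np.abs(r)) / 0.6745 or 1.0   # MAD-based scale
        w = np.where(np.abs(r) <= delta * scale, 1.0,
                     delta * scale / np.maximum(np.abs(r), 1e-12))
        mu = float((w * s).sum() / w.sum())        # weighted mean update
    return mu

def proxy_speeds_by_type(records):
    """records: iterable of (vessel_type, mean_voyage_speed) pairs."""
    by_type = {}
    for vtype, sp in records:
        by_type.setdefault(vtype, []).append(sp)
    return {vtype: huber_proxy_speed(sp) for vtype, sp in by_type.items()}
```

A single anomalous voyage speed barely moves the proxy, which is the point of using a robust loss here.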

3.3. Bayesian Optimization

Conventional approaches rely on manual tuning of hyperparameters for prediction models, where achieving excellent predictive accuracy often requires substantial expertise and considerable time investment. BO, as a sequential model-based optimization method, provides an efficient alternative for tuning hyperparameters in computationally expensive black-box models. The objective of BO is shown in Equation (1).
x^{*} = \arg\min_{x \in X} f(x)
where $x$: the hyperparameter configuration (L2 regularization coefficient, number of hidden units, and optimal initial learning rate); $f(x)$: the objective function of the black-box model (in this paper, the objective function of the CNN-LSTM model).
BO intelligently determines the next sampling point through the coordinated use of two core components: a surrogate model (typically a Gaussian Process (GP)) for the objective function, which provides a probabilistic estimate of the function’s behavior and is iteratively updated, and an acquisition function, which is evaluated based on this probabilistic model and used to identify the next sampling point most likely to optimize the objective function [41].
As shown in Figure 3, this study employed a GP model as a surrogate for the CNN-LSTM framework to reduce computational complexity and training cost. Thus, a probability surrogate for the objective function was constructed, as expressed in Equation (2).
f(x) \sim \mathcal{GP}\left(m(x),\, k(x, x')\right)
where $m(x)$: the mean function; $k(x, x')$: the covariance kernel function.
Consequently, an acquisition function (Expected Improvement (EI)) was employed to balance the weight between exploration and exploitation. After multiple iterations, the optimal hyperparameters were ultimately derived. The specific formulation of EI is expressed in Equations (3)–(5).
I(x) = \max\left(0,\; f_t^{*} - f(x)\right)
\alpha_{EI}(x) = \begin{cases} \left(f_t^{*} - \mu_t(x)\right)\Phi(Z) + \sigma_t(x)\,\phi(Z), & \sigma_t(x) > 0 \\ 0, & \sigma_t(x) = 0 \end{cases}
Z = \frac{f_t^{*} - \mu_t(x)}{\sigma_t(x)}
where $f_t^{*} = \min\{f(x_1), \ldots, f(x_n)\}$ denotes the current optimal value of the objective function (which, in this study, is the minimum value); $I(x)$: the improvement over the current optimum; $\mu_t(x)$, $\sigma_t(x)$: the mean and standard deviation predicted by the GP, respectively; $\Phi(Z)$, $\phi(Z)$: the cumulative distribution function and the probability density function of the standard normal distribution, respectively.
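Equations (3)–(5) can be evaluated directly. The sketch below computes EI for one candidate point given the GP posterior mean and standard deviation; it is a minimal illustration of the formulas, not the study's implementation:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: larger values mean a more promising candidate."""
    if sigma <= 0.0:
        return 0.0                                   # sigma = 0 branch of Eq. (4)
    z = (f_best - mu) / sigma                        # Eq. (5)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi         # Eq. (4), sigma > 0 branch
```

A candidate whose predicted mean lies well below the current best gets a high EI; one predicted to be worse gets EI near zero.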

3.4. Convolutional Neural Network

A CNN is a class of deep learning models specifically designed for processing and analyzing structured data, including visual and sequential data [42]. The CNN architecture primarily consists of four components: an input layer, multiple alternating convolutional layers, pooling layers, and a fully connected layer. As illustrated in Figure 4, the CNN model utilizes convolutional layers to identify patterns and variations within the input data and extract features from the 1D AIS data provided by the input layer. Subsequently, pooling layers are employed as a form of nonlinear down-sampling to further refine the features extracted by the convolutional layers. Pooling preserves the invariance of essential data features, retaining the most critical ones while reducing the dimensionality of the convolutional layer outputs. Through multiple iterative extraction steps, the fully connected layer ultimately acts as a classifier, integrating and mapping the extracted features derived from the pooling layers.
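To make the convolution-and-pooling mechanism concrete for 1D AIS feature vectors, here is a minimal numpy sketch (illustrative only; the study's actual layer sizes and activations are selected by BO):

```python
import numpy as np

def conv1d(x, kernel, bias=0.0):
    """Valid 1-D convolution (cross-correlation, deep-learning convention)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

def avg_pool(x, size=2):
    """Non-overlapping average pooling: down-samples while keeping trends."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).mean(axis=1)
```

A difference kernel such as `[1, -1]` responds to local changes in the sequence, which is the kind of local pattern a convolutional layer learns.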

3.5. Long Short-Term Memory

RNNs are widely used for modeling sequential data; however, they often encounter difficulties in capturing long-term dependencies [43]. When dealing with extended sequences, RNNs suffer from the loss of information during iterative updates, a phenomenon associated with the vanishing and exploding gradient problems. As an improved variant of RNNs, LSTM networks effectively address the challenges of long-term sequence prediction. As illustrated in Figure 5, a standard LSTM unit is built upon two core states: the cell state and the hidden state. Both states are controlled using three gates: the forget gate, the input gate, and the output gate.
The formula for the forget gate in LSTM is defined as follows:
f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)
Moreover, the formulas for the input gate are expressed as follows:
i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)
\tilde{C}_t = \tanh\left(W_c \cdot [h_{t-1}, x_t] + b_c\right)
C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t
Finally, the formulas for the LSTM output gate are defined as follows:
o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)
h_t = o_t \cdot \tanh(C_t)
where $f_t$: forget gate output; $W_f$, $W_c$, $W_i$, $W_o$: weight matrices; $h_{t-1}$: the hidden state from the previous time step; $x_t$: input at the current time step; $b_f$, $b_c$, $b_i$, $b_o$: bias terms; $\sigma$: the sigmoid activation function.
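The gate equations above correspond to one forward step of an LSTM cell. A minimal numpy sketch, with each weight matrix acting on the concatenated [h_{t−1}, x_t] vector, is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W['f'], W['i'], W['c'], W['o'] multiply [h_{t-1}; x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W['f'] @ z + b['f'])        # forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])        # input gate
    c_tilde = np.tanh(W['c'] @ z + b['c'])    # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde        # new cell state
    o_t = sigmoid(W['o'] @ z + b['o'])        # output gate
    h_t = o_t * np.tanh(c_t)                  # new hidden state
    return h_t, c_t
```

Because the hidden state is a sigmoid-gated tanh, each component of h_t is bounded in (−1, 1), which keeps long recurrences numerically stable.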

3.6. BO-CNN-LSTM

Referring to Figure 6, the AIS data are first processed using BO, during which a surrogate model is constructed based on GPs. Compared with manual hyperparameter tuning, this approach reduces the time required for parameter adjustment when handling large-scale datasets, thereby improving ETA prediction. Next, the CNN model, which exhibits strong capabilities in extracting local features from AIS data, is employed to derive high-level local feature representations. LSTM excels at handling data with long-term dependencies, and the features extracted by the CNN component inherently exhibit temporal correlations. Therefore, the LSTM layer is applied to further process these features, capturing long-term dependencies within the AIS data and refining the features derived from the CNN. Finally, a fully connected layer maps the high-level features extracted by the BO-CNN-LSTM framework to the final output of vessel voyage duration. This approach aims to enhance prediction accuracy, avoid overfitting, and improve the model’s generalizability. The formulas for the BO-CNN-LSTM model are presented below:
\theta^{*} = \arg\min_{\theta \in \Theta} L(\theta)
Z = \mathrm{Conv1D}(X, W_c) + b_c
A = \mathrm{ELU}\left(\mathrm{BatchNorm}(Z)\right)
F = \mathrm{AvgPool}(A)
h_t = \mathrm{LSTM}(f_t, h_{t-1}, c_{t-1})
h_T = \mathrm{LSTM}_{last}(F)
\hat{y} = W_o \cdot \mathrm{Dropout}(h_T) + b_o
where $\theta = \{N, \alpha, \lambda\}$: the number of hidden units, learning rate, and regularization coefficient, respectively; $W_c$: the weight of the convolution kernel; $b_c$: the bias term of the convolutional layer; $f_t$: the features extracted by the CNN model; $W_o$: the weight matrix of the output layer; $b_o$: the bias term of the output layer; Dropout: a regularization technique involving random deactivation (to prevent overfitting).

3.7. Ramer–Douglas–Peucker Algorithm

Traditionally, the Douglas–Peucker (DP) algorithm has been widely used for calculating sailing distance. However, the conventional DP algorithm may exhibit reduced efficiency when handling large volumes of latitude and longitude data. Additionally, positional deviations in AIS-recorded coordinates may cause the sailing distance to be overestimated.
In this study, an optimized variant of the DP algorithm, the RDP algorithm, was employed for sailing distance estimation. This algorithm is a popular trajectory simplification method [44], designed to eliminate redundant latitude and longitude points while preserving the essential shape of the sailing route. The RDP algorithm systematically calculates the perpendicular distance from each data point to a proposed simplified line segment. Points whose distances exceed a predefined threshold are retained, whereas those within the threshold are discarded. This process ensures that only significant course changes are preserved, thereby improving both computational efficiency and the accuracy of sailing distance calculations. The core principle of the RDP algorithm is expressed using the following formula:
d = \frac{\left|(y_n - y_0)\,x_i - (x_n - x_0)\,y_i + x_n y_0 - y_n x_0\right|}{\sqrt{(y_n - y_0)^2 + (x_n - x_0)^2}}
The ship’s trajectory is represented as an ordered sequence of AIS data points (P0, P1, P2, …, Pn), where each point Pi = (xi, yi) corresponds to a latitude–longitude coordinate. Furthermore, a predetermined threshold ϵ = 100 m is set to determine the range within which latitude–longitude points can be omitted.
The RDP algorithm first approximates the trajectory into a line segment connecting the start point P0 to the end point Pn. For any intermediate point Pi (1 < i < n), the perpendicular distance d to this line segment is calculated, as defined in Equation (19). If d > ϵ, this geographic coordinate Pi is retained; otherwise, it is omitted.
Subsequently, the point Pm (0 < m < n) with the maximum distance from the line segment connecting P0 and Pn is identified. If this distance exceeds ϵ, the algorithm splits the original trajectory P0 to Pn into two sub-trajectories: P0 to Pm and Pm to Pn. This recursive process continues, repeatedly identifying the farthest point and splitting the trajectory, until no remaining point satisfies d > ϵ.
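The recursive procedure described above can be sketched directly in Python. This is a generic RDP implementation over planar coordinates, not the paper's MATLAB code; applying it to raw latitude–longitude degrees with a metric threshold such as ϵ = 100 m would require a projection step, which is omitted here for brevity.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b (planar approximation)."""
    (x0, y0), (xn, yn), (xi, yi) = a, b, p
    num = abs((yn - y0) * xi - (xn - x0) * yi + xn * y0 - yn * x0)
    den = math.hypot(yn - y0, xn - x0)
    # Degenerate chord: fall back to the distance from p to the single point a
    return num / den if den else math.hypot(xi - x0, yi - y0)

def rdp(points, epsilon):
    """Recursively simplify a polyline, keeping points farther than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the point with the maximum distance from the chord P0–Pn
    d_max, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > d_max:
            d_max, idx = d, i
    if d_max > epsilon:
        # Split at the farthest point and simplify each sub-trajectory
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    # No significant course change between the endpoints: keep only them
    return [points[0], points[-1]]

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(track, epsilon=1.0))
```

On this toy track, the near-collinear intermediate points are discarded while the sharp turn near (3, 5) is preserved.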
The final simplified set of coordinate points is Pf = {P0, Pi, Pj, …, Pn}, where each retained intermediate point satisfies d > ϵ. Once the simplification is complete, the sailing distance is calculated using the Haversine distance formula, expressed as follows:
$$\mathrm{Total\ Distance} = \sum_{i=1}^{k} d_H(P_{i-1}, P_i)$$
Finally, the spherical distance between two geographic coordinates is calculated as follows:
$$d_H(P_1, P_2) = 2r \cdot \arcsin\!\left( \sqrt{ \sin^2\!\left(\frac{\Delta\phi}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2\!\left(\frac{\Delta\lambda}{2}\right) } \right)$$
where $k$: the number of coordinate points after simplification; $\phi_1, \phi_2$: the latitudes of the two coordinates, with $\Delta\phi = \phi_2 - \phi_1$; $\lambda_1, \lambda_2$: the corresponding longitudes, with $\Delta\lambda = \lambda_2 - \lambda_1$; $r$: the radius of the Earth. A schematic diagram of the RDP algorithm is presented in Figure 7.
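The Haversine leg distance and the summation over the simplified trajectory can be sketched as follows. The mean Earth radius of 6,371 km is an assumed constant; the paper does not state which value it uses.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters (assumed constant)

def haversine(p1, p2, r=EARTH_RADIUS_M):
    """Great-circle distance in meters between (lat, lon) pairs given in degrees."""
    phi1, lam1 = map(math.radians, p1)
    phi2, lam2 = map(math.radians, p2)
    dphi, dlam = phi2 - phi1, lam2 - lam1
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def total_distance(points):
    """Sum the Haversine legs over the simplified trajectory P0..Pk."""
    return sum(haversine(points[i - 1], points[i]) for i in range(1, len(points)))

# One degree of latitude along a meridian is roughly 111.2 km
print(round(haversine((0.0, 0.0), (1.0, 0.0)) / 1000, 1))
```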

4. Experiments

4.1. Parameter Settings

To clearly define the prediction setting, the experiments in this study are designed to estimate the total voyage duration prior to departure. The vessel’s ETA is then calculated by combining the predicted voyage duration with the known departure time, which is readily available from AIS data. This ensures that the model uses only information available at the time of prediction, aligning with real-world operational conditions.
The model was implemented in MATLAB R2024b. Training and testing experiments were conducted on the 23,947 valid voyage records, which were randomly split into training and test subsets at a 9:1 ratio to ensure a fair comparison across all models. To evaluate the stability and repeatability of the proposed approach, the test set was further partitioned into five subsets, each containing 480 voyage records, for repeated experiments. The parameter settings for the ship ETA prediction model based on AIS data are presented in Table 3.
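The split-and-partition step can be sketched as below. Note that a strict 9:1 split of 23,947 records yields 2,395 test voyages, so five disjoint subsets of exactly 480 records (2,400 total) cannot all be filled by simple slicing; in this sketch the final subset falls slightly short, and the paper's exact partitioning scheme may differ.

```python
import random

def split_voyages(records, train_ratio=0.9, n_subsets=5, subset_size=480, seed=42):
    """Shuffle voyage records, split 9:1, then carve fixed-size test subsets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for repeatability
    cut = int(len(shuffled) * train_ratio)
    train, test = shuffled[:cut], shuffled[cut:]
    subsets = [test[i * subset_size:(i + 1) * subset_size] for i in range(n_subsets)]
    return train, test, subsets

records = list(range(23947))  # stand-ins for the cleaned voyage records
train, test, subsets = split_voyages(records)
print(len(train), len(test), [len(s) for s in subsets])
```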

4.2. Evaluation Criteria

To analyze and evaluate the predictive performance of the proposed model, three metrics were selected for assessment and analysis: R2, MAE, and RMSE.
First, R2 is commonly used to assess the fitting of a regression model regarding the data. It is defined as follows:
$$SS_{res} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$
$$SS_{tot} = \sum_{i=1}^{n} (y_i - \bar{y})^2$$
$$R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$$
where $SS_{res}$: the sum of squared residuals; $SS_{tot}$: the total sum of squares; $\bar{y}$: the mean of the observed values; $\hat{y}_i$: the value predicted by the model. An R2 value close to 1 indicates that the model fits the data well, meaning most data points can be accurately predicted. An R2 value close to 0 shows that the model has not learned meaningful information from the dataset and essentially does not fit the data. A negative R2 value indicates that the model performs worse than simply predicting the mean of the data.
Moreover, MAE represents the average of the absolute errors between the predicted values and the true ones. It is commonly used to indicate the prediction accuracy of a model. It is expressed as follows:
$$MAE = \frac{1}{n}\sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|$$
Finally, RMSE is employed to quantify the deviation between predicted and true values. Because RMSE is highly sensitive to outliers in the data, it is widely used to measure this deviation in forecasting. It is mathematically formulated as follows:
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}$$
Both the MAE and RMSE indicate smaller errors when their values are closer to 0.
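The three metrics can be computed in a few lines of plain Python; the sample values below are arbitrary and serve only to illustrate the calculation.

```python
import math

def r2_mae_rmse(y_true, y_pred):
    """Compute R², MAE, and RMSE for paired true/predicted values."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)             # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(p - y) for y, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [11.0, 19.0, 32.0, 38.0]
r2, mae, rmse = r2_mae_rmse(y_true, y_pred)
print(round(r2, 3), round(mae, 3), round(rmse, 3))  # prints: 0.98 1.5 1.581
```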

4.3. Ablation Experiment

To systematically assess the effectiveness of each core component within the proposed BO-CNN-LSTM model, this study conducts a series of ablation experiments. Specifically, six model structures are evaluated: the complete BO-CNN-LSTM, the standalone CNN, the standalone LSTM, the CNN with only Bayesian Optimization (BO-CNN), the LSTM with only Bayesian Optimization (BO-LSTM), and the basic CNN-LSTM combination without Bayesian Optimization. All models were evaluated under identical experimental conditions using the same five test subsets, recording R2, RMSE, and MAE. Percentage error distribution plots and fitting effect plots were also generated to provide a comprehensive comparison. The results are presented in Table 4, Table 5 and Table 6 and Figure 8 and Figure 9.
As shown in Table 4, Table 5 and Table 6, the BO-CNN-LSTM model consistently achieved the highest R2 values across all five repeated experiments, reaching a maximum of 0.98792, with minimal fluctuation between runs. This indicates that its goodness-of-fit and stability are superior to those of the other ablation variants. In contrast, the CNN-LSTM model without BO had R2 values ranging from 0.976 to 0.980, slightly lower than those of BO-CNN-LSTM, suggesting that the introduction of BO enhances the performance of the CNN-LSTM architecture.
Moreover, the BO-LSTM model exhibited a sharp drop in R2 to 0.60536 in the fourth experiment, accompanied by a dramatic increase in RMSE and MAE, which reached 44.9242 and 25.3151, respectively, demonstrating considerable instability. This anomalous fluctuation indicates that, when only the LSTM is optimized with BO (without incorporating the CNN for feature extraction), the model becomes highly sensitive to the data distribution, making it prone to local optima and overfitting and thus compromising prediction stability.
In addition, the standalone CNN model generally yielded relatively low R2 values (ranging from 0.931 to 0.970), while its RMSE and MAE were significantly higher than those of any model incorporating LSTM. This confirms the limitations of a purely convolutional architecture in temporal modeling tasks. Although the standalone LSTM model outperformed the CNN, its R2 values remained lower than those of BO-CNN-LSTM and its error metrics were higher, indicating that a single recurrent structure struggles to fully leverage the local spatial features present in AIS data.
Finally, the BO-CNN model achieved R2 values below 0.945 across all five experiments, with RMSE ranging between 16 and 18 and MAE between 10 and 11, markedly worse than both BO-LSTM and BO-CNN-LSTM. This further demonstrates that, in the absence of temporal modeling capability, the local features extracted by the CNN alone are insufficient for accurately predicting total voyage duration.
Moreover, Figure 8a presents the percentage error distributions of the models. The BO-CNN-LSTM model exhibits the most concentrated error distribution, with a median error close to zero and very few outliers, indicating high consistency and reliability in its predictions.
As shown in Figure 8b, the CNN model exhibits a wide error distribution, with a considerable number of samples showing large positive and negative deviations, particularly a noticeable bias toward overestimation. This behavior is closely related to the lack of temporal modeling capability in CNN, limiting its ability to capture temporal variations.
As shown in Figure 8c, the LSTM model shows an improved error distribution compared to CNN. However, it still preserves several high-deviation samples, suggesting that a single temporal modeling structure has limited adaptability when dealing with complex trajectory patterns.
As shown in Figure 8d, the BO-CNN model displays a clear bimodal error distribution. While some samples have errors concentrated near zero, others show significant deviation, reflecting that the CNN model, optimized solely by BO, yields accurate predictions for some voyages but suffers from systematic bias in others.
As shown in Figure 8e, the BO-LSTM model exhibits an extreme error distribution in the fourth experiment, with a significantly expanded error range, further highlighting its instability under certain conditions.
Although the CNN-LSTM model shows an improved error distribution compared to standalone CNN or LSTM (Figure 8f), its results remain relatively dispersed when compared to the BO-CNN-LSTM framework. This indicates that, in the absence of hyperparameter optimization, the straightforward combination of CNN and LSTM struggles to achieve optimal synergistic performance.
Figure 9a illustrates the fitting between predicted and actual values for each model. The BO-CNN-LSTM model shows scatter points that are densely concentrated along the ideal diagonal line, exhibiting a strong linear relationship with the absence of obvious systematic bias across the entire voyage range. This indicates that the model has good generalization capability within the current dataset.
As shown in Figure 9b, the CNN model exhibits a pronounced “plateau effect” in the short-voyage range, where predicted values are clustered within a narrow lower band and fail to reflect the continuous variation in actual voyage duration. This behavior highlights the limitations of a purely convolutional structure in modeling temporal dynamics.
As shown in Figure 9c, although the LSTM model improves fitting performance in the short-voyage range, a degree of dispersion and bias in the medium- to long-voyage range still exists.
As shown in Figure 9d, the BO-CNN model displays poor fitting performance, with widely scattered points and several significant outliers where predictions deviate substantially from actual values. This suggests that relying solely on CNN and BO optimization is insufficient to capture temporal dependencies in voyage prediction tasks.
As shown in Figure 9e, the BO-LSTM model performs well in most experimental cases; however, a significant number of outliers appear in the fourth experiment, further confirming instability under certain conditions.
As shown in Figure 9f, the CNN-LSTM model achieves improved fitting compared to standalone CNN or LSTM models. Nevertheless, the scatter points remain relatively dispersed, with noticeable deviations in some samples. This observation indicates that, without the introduction of BO for hyperparameter tuning, the combined model still has room for improvement in both prediction accuracy and stability.

4.4. Comparative Analysis of Model Performance

To further evaluate the stability and generalization capability of the BO-CNN-LSTM model, this study conducted repeated experiments using five different test subsets, each containing 480 voyage records. For each subset, the R2, RMSE, and MAE metrics were recorded and compared. The results are shown in Table 7, Table 8 and Table 9, and their corresponding percentage error distributions and fitting effects are illustrated in Figure 10 and Figure 11.
As shown in Table 7, Table 8 and Table 9, the BO-CNN-LSTM model achieved the highest R2 values across the five repeated experiments, reaching a maximum of 0.98792. At the same time, it produces the lowest RMSE and MAE values, demonstrating both stable and excellent predictive performance. In contrast, the AdaBoost model exhibited R2 values consistently below 0.34, accompanied by significantly larger error metrics and substantial fluctuations, indicating poor suitability for this task. Moreover, Bi-GRU and Elman models showed intermediate performance, but their R2 values and error metrics varied notably across experiments, reflecting insufficient stability. Finally, the Random Forest (RF) model maintained a stable R2 of approximately 0.97; however, its MAE and RMSE remained higher than those of the BO-CNN-LSTM model, indicating a gap in both goodness-of-fit and error minimization.
Figure 10 illustrates the percentage error distributions of the models across repeated experiments. The BO-CNN-LSTM model exhibited the most concentrated error distribution, with the smallest median error and minimal outliers, demonstrating strong prediction consistency and robustness. In contrast, AdaBoost showed a highly dispersed error distribution, with predictions generally underestimating the actual values; although some overestimations were observed, the errors were large, indicating extremely poor prediction accuracy. The Bi-GRU and Elman models performed reasonably in some experiments, but their error distributions remained wide, with certain experiments showing substantial deviations, suggesting weak adaptability to data variations. Among the five compared models, the error distribution of the RF model was the most concentrated, indicating relatively good robustness, although its errors remained larger than those of the BO-CNN-LSTM model.
Figure 11 illustrates the relationship between predicted and actual values for each model. The scatter points of the BO-CNN-LSTM model were densely concentrated along the ideal diagonal line, indicating a significant linear relationship and achieving excellent fitting performance under the current experimental conditions. In contrast, the other models exhibit varying degrees of systematic bias across different voyage ranges. For instance, the RF model tended to underestimate voyage durations in medium- to long-distance scenarios. Meanwhile, the Elman and Bi-GRU models displayed substantial fluctuations in short-distance voyage predictions, further confirming their limitations in modeling complex spatiotemporal characteristics.

5. Discussion

5.1. Discussion on Ablation Experiments

The ablation study results reveal the distinct contributions of each core component under the current experimental conditions. The combination of CNN and LSTM forms the foundation for achieving high prediction accuracy, as the non-optimized CNN-LSTM outperforms both standalone CNN and LSTM models. This finding confirms the effectiveness of combining local feature extraction with temporal dependency modeling. The standalone CNN model exhibits a pronounced “plateau effect” in short-voyage predictions, indicating its limited capability in capturing temporal dynamics. Although the standalone LSTM model achieves improved performance compared with CNN, it still shows bias in medium- to long-voyage predictions. This suggests that a purely recurrent structure is insufficient to fully exploit the local spatial information in AIS data.
Under the current experimental conditions, the introduction of BO significantly enhances both the accuracy and stability of the model. In this study, the BO-CNN-LSTM framework consistently achieves higher R2 values and lower error metrics than the non-optimized CNN-LSTM across repeated experiments. The results indicate that BO efficiently explores the hyperparameter space, enabling the CNN-LSTM model to approach high-performance hyperparameter configurations within a limited number of iterations. This architecture also mitigates issues related to overfitting and unstable gradients.
Under the current experimental conditions, applying BO to single-structure models reveals its clear limitations. Across all experiments, the BO-CNN framework achieves R2 values below 0.945, and its error distribution exhibits a bimodal characteristic. This indicates that relying solely on local features—even with hyperparameter optimization—is insufficient for prediction tasks that require strong temporal modeling capabilities. Although the BO-LSTM model performs well in most experiments, it shows a noticeable performance drop in one test subset, accompanied by a significantly widened error distribution. This behavior suggests that, compared with BO-CNN-LSTM, optimizing only the recurrent structure without incorporating CNN-based feature extraction increases sensitivity to data distribution, thereby reducing model stability. Overall, under the experimental conditions of this study, the joint integration of CNN and LSTM enables the full advantages of BO to be more effectively realized.

5.2. Discussion on Comparative Experiments

In this study, the results of the comparative experiments further confirmed the superiority of the BO-CNN-LSTM model over traditional machine learning techniques, such as RF and AdaBoost, as well as shallow neural networks, including Elman and Bi-GRU. Among the traditional machine learning models, RF achieved relatively stable R2 values; however, it exhibited systematic underestimation in medium- to long-voyage predictions, reflecting its inherent limitations in temporal regression tasks. In contrast, AdaBoost performed the worst across all evaluation metrics, with a highly dispersed error distribution and strong sensitivity to outliers. The shallow neural networks, Elman and Bi-GRU, outperformed the traditional models but showed considerable fluctuations across repeated experiments, indicating that such models struggle to consistently capture complex temporal features in the absence of hyperparameter optimization.
Across the five repeated experiments, BO-CNN-LSTM exhibited the smallest fluctuations in R2, RMSE, and MAE, demonstrating good stability. Its error distribution was the most concentrated, with a median error close to zero and limited outliers. The fitting plot indicated that its scatter points were densely distributed around the ideal diagonal, with no systematic bias across either short- or long-distance voyages. These findings suggest that the proposed model achieved consistent predictive performance on the test set, with clear advantages over the compared baseline models.

6. Conclusions

This study proposes a BO-CNN-LSTM model for predicting voyage duration prior to departure, thereby enabling accurate vessel ETA prediction. The model aims to address key limitations of conventional ETA prediction methods, including their reliance on manual hyperparameter tuning and their limited ability to capture complex spatiotemporal features. Within this framework, the CNN module is employed to extract local spatial features from voyage data, while the LSTM module captures long-term temporal dependencies. In addition, BO automates the hyperparameter optimization process, collectively enhancing prediction accuracy, robustness, and model development efficiency.
The main findings of this study are as follows:
(1) The synergy between CNN and LSTM forms the foundation for high prediction accuracy.
(2) Under the current experimental conditions, BO significantly enhances both the accuracy and stability of the model.
(3) In this study, applying BO to single-structure models, including BO-CNN and BO-LSTM, yields limited or unstable performance, whereas the joint optimization of CNN and LSTM achieves better results under the given experimental conditions.
Moreover, this study develops a structured framework that integrates spatiotemporal feature extraction with automated hyperparameter optimization, enabling end-to-end learning of the mapping between input features and voyage duration under the current experimental conditions. In addition, repeated experimental validation across multiple test subsets provides a reliable basis for assessing model stability.
Despite the promising results achieved, several limitations must be acknowledged within this work. They are the following:
(1) The current model relies solely on historical AIS data and does not incorporate real-time dynamic factors, such as meteorological conditions, sea state, or port congestion, which may affect prediction reliability when unexpected disruptions occur;
(2) As the hyperparameter space dimensionality increases, BO computational overhead grows accordingly, necessitating a trade-off between efficiency and performance in practical deployment.
Future research directions include integrating multi-source information (e.g., real-time weather data and port operational status), designing a lightweight architecture with attention mechanisms to optimize feature fusion and reduce deployment costs, and developing incremental learning mechanisms to enable continuous model updating. Furthermore, exploring graph neural networks for modeling port–vessel interactions, as well as conducting cross-regional transferability studies, may advance the practical deployment of ETA prediction systems.

Author Contributions

Conceptualization, Q.C., Z.Y., J.G., Y.-y.L. and P.Z.; Methodology, Q.C., Z.Y. and J.G.; Writing—original draft, Q.C. and Z.Y.; Writing—review & editing, Y.-y.L. and P.Z.; Supervision, P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fujian Provincial Department of Human Resources and Social Security under the Funding Program for the Training of High-Level Talents and Young Excellent Talents for 2024–2025; a Fujian Province Philosophy and Social Sciences Basic Theory Research Key Project (FJ2024MGCA025); a Jimei University Maritime Silk Road National Research Institute Fund Project (JMGB202433); a Jimei University Postgraduate Teaching and Research Project (C151123); an Educational Science Project of the Fujian Provincial Educational Science Planning Office (FJJKB25072); and the National Social Science Fund of China (23&ZD138).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. UNCTAD. Review of Maritime Transport 2019; United Nations Conference on Trade and Development: Geneva, Switzerland, 2019. [Google Scholar]
  2. Lin, C.C.; Chang, C.H. Evaluating employee’s perception toward the promotion of safety marketing at ports. Int. J. Shipp. Transp. Logist. 2021, 13, 275–299. [Google Scholar] [CrossRef]
  3. Luo, M.F.; Shin, S.H. Half-century research developments in maritime accidents: Future directions. Accid. Anal. Prev. 2019, 123, 448–460. [Google Scholar] [CrossRef]
  4. Yan, R.; Wang, S.H.; Du, Y.Q. Development of a two-stage ship fuel consumption prediction and reduction model for a dry bulk ship. Transp. Res. Part E Logist. Transp. Rev. 2020, 138, 101930. [Google Scholar] [CrossRef]
  5. Ksciuk, J.; Kuhlemann, S.; Tierney, K.; Koberstein, A. Uncertainty in maritime ship routing and scheduling: A literature review. Eur. J. Oper. Res. 2023, 308, 499–524. [Google Scholar] [CrossRef]
  6. Chu, Z.; Yan, R.; Wang, S. Evaluation and prediction of punctuality of vessel arrival at port: A case study of Hong Kong. Marit. Policy Manag. 2023, 51, 1096–1124. [Google Scholar] [CrossRef]
  7. Wang, Y.; Zhang, X.; Guo, Y. Predicting estimated time of arrival for ships: A frequency-based approach considering met-ocean factors. Ocean Eng. 2025, 337, 121873. [Google Scholar] [CrossRef]
  8. Sun, T.; Yin, Y.; Liu, C. AIS-data-guided trajectory planning and control for ship autonomous berthing. Ocean Eng. 2026, 350, 124270. [Google Scholar] [CrossRef]
  9. Gu, S.; Zhang, J. Integrated camera array and AIS approach for ship trajectory monitoring and bridge collision prevention. Eng. Struct. 2025, 349, 121867. [Google Scholar] [CrossRef]
  10. Tian, W.; Zhu, M.; Han, P.; Kjerstad, Ø.K.; Li, G.; Zhang, H. Leveraging AIS data for maneuver detection and knowledge extraction during ship encounter. Ocean Eng. 2025, 343, 122973. [Google Scholar] [CrossRef]
  11. Li, Z.; Zhuang, Q.; Chu, H.; Gao, S.; Cheng, L.; Cheng, J.; Duan, Z.; Xue, Q.; Yao, Y.; Chen, D.; et al. Assessment of navigable water depth reliability in the Port of Singapore on the basis of AIS. Appl. Ocean Res. 2025, 158, 104574. [Google Scholar] [CrossRef]
  12. Qi, X.; Li, Z.; Li, S.; Song, C.; Zhou, Y.; Li, J. Spatiotemporal evolution and prediction of Arctic shipping black carbon emissions based on AIS data 2016 to 2022. Mar. Pollut. Bull. 2025, 219, 118318. [Google Scholar] [CrossRef]
  13. Li, M.; Chen, J.; Jiang, G.; Li, F.; Zhang, R.; Gong, S.; Lv, Z. TAS-TsC: A data-driven framework for estimating time of arrival using temporal-attribute-spatial tri-space coordination of truck trajectories. Appl. Soft Comput. 2025, 178, 113214. [Google Scholar] [CrossRef]
  14. Song, M.-J.; Cho, Y.-S. Early warning for maximum tsunami heights and arrival time based on an artificial neural network. Coast. Eng. 2024, 192, 104563. [Google Scholar] [CrossRef]
  15. Soltani Moghadam, S.; Ansari, A.; Etemadsaeed, L.; Tatar, M.; Mahmoodabadi, M. Earthquake location and magnitude estimation using seismic arrival times pattern and gradient boosted decision trees. Artif. Intell. Geosci. 2025, 6, 100149. [Google Scholar] [CrossRef]
  16. Cao, F.; Tang, T.; Gao, Y.; Michler, O.; Schultz, M. Predicting flight arrival times with deep learning: A strategy for minimizing potential conflicts in gate assignment. Transp. Res. Part C Emerg. Technol. 2024, 169, 104866. [Google Scholar] [CrossRef]
  17. Du, P.C.; Wang, W.Y.; Tang, G.L.; Guo, Z.J. Study on the ship arrival pattern of container terminals. Appl. Mech. Mater. 2013, 409–410, 1197–1203. [Google Scholar] [CrossRef]
  18. Jung, H.; Lee, K.W.; Choi, J.H.; Cho, E.S. Bayesian estimation of vessel destination and arrival times. In Proceedings of the 12th ACM International Conference on Distributed and Event-Based Systems (DEBS ‘18); Association for Computing Machinery: New York, NY, USA, 2018; pp. 195–197. [Google Scholar] [CrossRef]
  19. Alessandrini, A.; Mazzarella, F.; Vespe, M. Estimated time of arrival using historical vessel tracking data. IEEE Trans. Intell. Transp. Syst. 2019, 20, 7–15. [Google Scholar] [CrossRef]
  20. Kim, S.; Kim, H.; Park, Y. Early detection of vessel delays using combined historical and real-time information. J. Oper. Res. Soc. 2017, 68, 182–191. [Google Scholar] [CrossRef]
  21. Pani, C.; Vanelslander, T.; Fancello, G.; Cannas, M. Prediction of late/early arrivals in container terminals: A qualitative approach. Eur. J. Transp. Infrastruct. Res. 2015, 15, 536–550. [Google Scholar] [CrossRef]
  22. Nguyen, D.D.; Van, C.L.; Ali, M.I. Vessel destination and arrival time prediction with sequence-to-sequence models over spatial grid. In Proceedings of the 12th ACM International Conference on Distributed and Event-Based Systems (DEBS ‘18); Association for Computing Machinery: New York, NY, USA, 2018; pp. 217–220. [Google Scholar] [CrossRef]
  23. Bodunov, O.; Schmidt, F.; Martin, A.; Brito, A.; Fetzer, C. Real-time destination and ETA prediction for maritime traffic. In Proceedings of the 12th ACM International Conference on Distributed and Event-Based Systems; ACM: Hamilton, NZ, USA, 2018; pp. 198–201. [Google Scholar] [CrossRef]
  24. Park, K.; Sim, S.; Bae, H. Vessel estimated time of arrival prediction system based on a path-finding algorithm. Marit. Transp. Res. 2021, 2, 100012. [Google Scholar] [CrossRef]
  25. Chib, S.; Greenberg, E. Understanding the Metropolis-Hastings algorithm. Am. Stat. 1995, 49, 327–335. [Google Scholar] [CrossRef]
  26. El Mekkaoui, S.; Benabbou, L.; Berrado, A. Predicting ships estimated time of arrival based on AIS data. In Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications; ACM: Rabat, Morocco, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  27. Hao, L.; Han, Y.; Shi, C.; Pan, Z. Recurrent neural networks for nonparametric modeling of ship maneuvering motion. Int. J. Nav. Archit. Ocean Eng. 2022, 14, 100436. [Google Scholar] [CrossRef]
  28. Bourzak, I.; El Mekkaoui, S.; Berrado, A.; Caron, S.; Benabbou, L. Deep learning approaches for vessel estimated time of arrival prediction: A case study on the Saint Lawrence River. In Proceedings of the 2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA); IEEE: Piscataway, NJ, USA, 2023; pp. 1–7. [Google Scholar] [CrossRef]
  29. Evmides, T.; Gkonis, P.; Anagnostopoulos, I. Enhancing Prediction Accuracy of Vessel Arrival Times Using Machine Learning. J. Mar. Sci. Eng. 2024, 12, 1362. [Google Scholar] [CrossRef]
  30. El Mekkaoui, S.; Benabbou, L.; Berrado, A. Deep learning models for vessel’s ETA prediction: Bulk ports perspective. Flex. Serv. Manuf. J. 2023, 3, 1012–1036. [Google Scholar] [CrossRef]
  31. Baghizadeh, K.; Pisinger, D.; Røpke, S. Using AIS Data to Analyze and Optimize Vessel Operations for Offshore Wind Farm Maintenance. In Innovations in Sustainable MaritimeTechnology—IMAM 2025; Springer: Cham, Switzerland, 2025; Volume 312, pp. 456–472. [Google Scholar] [CrossRef]
  32. Saber, S.M.; Thowai, K.Z.; Rahman, M.A.; Hassan, M.M.; Bari, A.M.; Raihan, A. High-accuracy prediction of vessels’ estimated time of arrival in seaports: A hybrid machine learning approach. Ocean Eng. 2025, 298, 117234. [Google Scholar] [CrossRef]
  33. Lei, J.; Chu, Z.; Wu, Y.; Liu, X.; Luo, M.; He, W.; Liu, C. Predicting vessel arrival times on inland waterways: A tree-based stacking approach. Ocean Eng. 2024, 294, 116838. [Google Scholar] [CrossRef]
  34. Du, L.; Xiu, J.; Banda, O.A.V.; Zhang, M.; Zhang, F.; Wen, Y. A segmented ETA prediction method for inland vessels based on improved DEKM and CNN-LSTM algorithms. Ocean Eng. 2025, 342, 122694. [Google Scholar] [CrossRef]
  35. Valero, C.I.; Ivancos Pla, E.; Vaño, R.; Garro, E.; Boronat, F.; Palau, C.E. Design and development of an AIoT architecture for introducing a vessel ETA cognitive service in a legacy port management solution. Sensors 2021, 21, 8133. [Google Scholar] [CrossRef]
  36. Xu, X.; Liu, C.; Li, J.; Miao, Y. Trajectory clustering for SVR-based time of arrival estimation. Ocean Eng. 2022, 259, 111930. [Google Scholar] [CrossRef]
  37. Jahn, C.; Scheidweiler, T. Port call optimization by estimating ships’ time of arrival. In Proceedings of the 6th International Conference on Dynamics in Logistics (LDIC 2018); Springer International Publishing: Bremen, Germany, 2018; pp. 172–177. [Google Scholar] [CrossRef]
  38. Pan, N.; Ding, Y.; Fu, J.; Wang, J.; Zheng, H. Research on ship arrival law based on route matching and deep learning. J. Phys. Conf. Ser. 2021, 1952, 022023. [Google Scholar] [CrossRef]
  39. Zhang, H.; Guo, J.N.; Guo, S.S. An optimization study on dynamic berth allocation based on vessel arrival time prediction. Ocean Eng. 2025, 332, 121321. [Google Scholar] [CrossRef]
  40. Dong, G.; Liu, H. Feature Engineering for Machine Learning and Data Analytics, 1st ed.; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar] [CrossRef]
  41. Garnett, R. Bayesian Optimization; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar]
  42. Cai, J.C.; Lian, F.; Yang, Z.Z. Research on short-term ship traffic flow prediction in port waters based on disaggregated model and long short-term memory (LSTM) model. Navig. China 2025, 48, 77–83. [Google Scholar] [CrossRef]
  43. Zhang, M.; Sheng, Y.; Tian, N.; Liu, W.; Wang, H.; Zhu, L.; Xu, Q. Research and application of traffic forecasting in customer service center based on ARIMA model and LSTM neural network model. J. Phys. Conf. Ser. 2021, 1981, 106535. [Google Scholar] [CrossRef]
  44. Ramer, U.; Peucker, T.K. An iterative procedure for the polygonal approximation of plane curves. Comput. Graph. Image Process. 1972, 1, 244–256. [Google Scholar] [CrossRef]
Figure 1. Route distribution of selected voyages.
Figure 2. Pearson correlation coefficient matrix of AIS data.
Figure 3. Flowchart of the Bayesian optimization structure.
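The Bayesian optimization loop of Figure 3 can be illustrated with a minimal numpy sketch: a Gaussian-process surrogate with an expected-improvement acquisition function, run for the 15-iteration budget listed in Table 3. The toy objective, kernel length-scale, and normalized search range are illustrative assumptions, not the paper's implementation (which tunes the L2 regularization coefficient, number of hidden units, and initial learning rate).

```python
import numpy as np
from math import erf, sqrt

def rbf(a, b, length=0.15):
    # Squared-exponential kernel on scalar inputs (prior variance 1).
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(x_cand, x_obs, y_obs, noise=1e-6):
    # GP posterior mean and variance at candidate points.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_cand, x_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, y_best):
    sd = np.sqrt(var)
    z = (y_best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (y_best - mu) * Phi + sd * phi

def bayes_opt(objective, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    x_obs = rng.uniform(0.0, 1.0, size=3)          # random initial design
    y_obs = np.array([objective(x) for x in x_obs])
    cand = np.linspace(0.0, 1.0, 201)              # candidate grid
    for _ in range(n_iter):
        mu, var = gp_posterior(cand, x_obs, y_obs)
        x_next = cand[np.argmax(expected_improvement(mu, var, y_obs.min()))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, objective(x_next))
    return x_obs[np.argmin(y_obs)], float(y_obs.min())

# Toy objective standing in for validation loss over one normalized
# hyperparameter; its minimum is at x = 0.3.
best_x, best_y = bayes_opt(lambda x: (x - 0.3) ** 2, n_iter=15)
```

Each iteration refits the surrogate to all evaluations so far and queries the point with the highest expected improvement, which is why BO needs far fewer trials than grid or manual search.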
Figure 4. Schematic diagram of the CNN model structure.
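The CNN stage of Figure 4 extracts local features by sliding small kernels over the input. A minimal sketch of one "valid" 1-D convolution with ReLU (the input sequence and kernel weights are invented; the size-3 kernel and unit stride echo the 3 × 1 kernel and [1, 1] stride settings in Table 3):

```python
import numpy as np

def conv1d_valid(x, kernel, stride=1):
    """'Valid' 1-D convolution (cross-correlation) over a feature sequence."""
    k = len(kernel)
    n_out = (len(x) - k) // stride + 1
    return np.array([float(np.dot(x[i * stride:i * stride + k], kernel))
                     for i in range(n_out)])

def relu(z):
    return np.maximum(z, 0.0)

# Invented per-voyage input sequence (e.g. normalized speed samples).
seq = np.array([0.2, 0.4, 0.9, 0.8, 0.3, 0.1])
kernel = np.array([1.0, 0.0, -1.0])          # one size-3 filter
features = relu(conv1d_valid(seq, kernel))   # local-feature map fed onward
```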
Figure 5. Unit structure of the LSTM model.
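The LSTM unit of Figure 5 combines input, forget, and output gates to carry temporal dependencies along the sequence. A single step of the standard cell can be sketched in numpy as follows; the input dimension, hidden size, and random weights here are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,), stacked [i, f, g, o]."""
    H = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # candidate cell state
    o = sigmoid(z[3 * H:4 * H])   # output gate
    c = f * c_prev + i * g        # cell state update
    h = o * np.tanh(c)            # hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 6                       # illustrative input and hidden sizes
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.normal(size=(5, D)):  # a length-5 toy sequence
    h, c = lstm_step(x, h, c, W, U, b)
```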
Figure 6. BO-CNN-LSTM model architecture.
Figure 7. Schematic diagram of the RDP algorithm.
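Figure 7 depicts the Ramer–Douglas–Peucker algorithm [44] used to simplify vessel trajectories when computing sailing distance: recursively keep the point farthest from the chord between the endpoints if it exceeds a tolerance, otherwise drop the interior points. A minimal recursive implementation on an invented track:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification."""
    def perp_dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        seg = math.hypot(dx, dy)
        if seg == 0.0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * px - dx * py + bx * ay - by * ax) / seg

    # Find the interior point farthest from the endpoint chord.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        # Recurse on both halves, dropping the duplicated split point.
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

track = [(0, 0), (1, 0), (2, 2), (3, 0), (4, 0)]
print(rdp(track, 1.0))  # → [(0, 0), (2, 2), (4, 0)]
```

Summing great-circle distances between the retained points then gives a sailing-distance estimate far cheaper than using every raw AIS fix.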
Figure 8. Percentage error plots for different models: (a) BO-CNN-LSTM model; (b) CNN model; (c) LSTM model; (d) BO-CNN model; (e) BO-LSTM model; and (f) CNN-LSTM model.
Figure 9. Fitting plots for different models: (a) BO-CNN-LSTM model; (b) CNN model; (c) LSTM model; (d) BO-CNN model; (e) BO-LSTM model; and (f) CNN-LSTM model.
Figure 10. Percentage error plots for different models: (a) BO-CNN-LSTM model; (b) Adaboost model; (c) Bi-GRU model; (d) Elman model; and (e) RF model.
Figure 11. Fitting plots for different models: (a) BO-CNN-LSTM model; (b) Adaboost model; (c) Bi-GRU model; (d) Elman model; and (e) RF model.
Table 1. Selection of input features in recent studies on vessel arrival time prediction.

Input Feature | References
Length, width, vessel type, vessel sailing direction, temporal features (voyage start time), voyage duration, latitude, longitude, sailing distance, draft, number of vessels in the region, average regional speed. | Lei et al. [33]; Du et al. [34]
Sailing distance, longitude, latitude, speed over ground (SOG), course over ground (COG), heading, draft. | Valero et al. [35]; Xu et al. [36]
Wind speed, wind direction, current vessel speed, current wind direction, water level, wave height, area indicator. | Jahn et al. [37]
MMSI, received time, longitude, latitude, SOG, COG. | Pan et al. [38]
Longitude, latitude, COG, SOG, average speed, sailing distance, vessel length, vessel width. | Zhang et al. [39]
Table 2. Proxy speeds corresponding to different vessel types.

Vessel Type | Proxy Speed
Dry bulk carriers | 7.8
Tramp general cargo vessels | 7.89
Liquid bulk carriers | 8.6
Special-purpose vessels | 10.38
Container ships | 13.59
Table 3. Parameter settings for the ship ETA prediction model based on AIS data.

Model | Parameter Settings
BO-CNN-LSTM | Gradient threshold: 1; Dataset partitioning for training: 0.9; Batch size: 1; Learning rate drop period: 400; Cell state value: 6; Number of Bayesian Optimization iterations: 15; Maximum training epochs: 500; Learning rate drop factor: 0.1. Hyperparameters tuned by Bayesian Optimization: L2 regularization coefficient, number of hidden units, and initial learning rate.
CNN-LSTM | Dataset partitioning for training: 0.9; L2 regularization coefficient: 1 × 10^-4; Learning rate drop period: 200; Number of hidden units: 50; Maximum training epochs: 300; Learning rate drop factor: 0.1; Initial learning rate: 0.001; Gradient threshold: 1.
CNN | Dataset partitioning for training: 0.9; Convolution kernel size: 3 × 1; Stride: [1, 1]; Initial learning rate: 0.01; Convolutional layers: 2; Convolutional kernels: 8; Batch size: 32; Learning rate drop period: 200; Pooling layers: 2; Pooling window size: [2, 1]; Maximum training epochs: 300; Learning rate drop factor: 0.1.
BO-CNN | Dataset partitioning for training: 0.9; Convolution kernel size: 3 × 1; Stride: [1, 1]; Convolutional kernels: 8; Convolutional layers: 2; Learning rate drop period: 200; Batch size: 32; Pooling window size: [2, 1]; Pooling layers: 2; Learning rate drop factor: 0.1; Maximum training epochs: 300; Number of Bayesian Optimization iterations: 15. Hyperparameters tuned by Bayesian Optimization: L2 regularization coefficient, number of hidden units, and initial learning rate.
LSTM | Dataset partitioning for training: 0.9; Learning rate drop period: 150; Number of hidden units: 4; Initial learning rate: 0.01; Maximum training epochs: 200; Learning rate drop factor: 0.1.
BO-LSTM | Dataset partitioning for training: 0.9; Learning rate drop period: 150; Maximum training epochs: 200; Learning rate drop factor: 0.1; Number of Bayesian Optimization iterations: 15. Hyperparameters tuned by Bayesian Optimization: L2 regularization coefficient, number of hidden units, and initial learning rate.
Random forest | Dataset partitioning for training: 0.9; Number of decision trees: 100; Minimum leaf size: 3.
Elman | Dataset partitioning for training: 0.9; Training target error: 1 × 10^-5; Hidden layer: 10 neurons; Training epochs: 1000.
AdaBoost | M2 algorithm; Number of bins: 100; Base learner: decision tree; Dataset partitioning for training: 0.9.
Bi-GRU | Dataset partitioning for training: 0.9; Learning rate drop factor: 0.1; Maximum training epochs: 300; Learning rate drop period: 200; Initial learning rate: 0.01.
Table 4. Comparison table of R2 for different models.

Prediction Model | 1st R2 | 2nd R2 | 3rd R2 | 4th R2 | 5th R2
BO-CNN-LSTM | 0.98446 | 0.98308 | 0.98233 | 0.98276 | 0.98792
CNN | 0.93109 | 0.97043 | 0.96622 | 0.96325 | 0.95417
LSTM | 0.97701 | 0.97435 | 0.97671 | 0.97718 | 0.97686
BO-CNN | 0.94344 | 0.94246 | 0.94125 | 0.94484 | 0.94176
BO-LSTM | 0.98574 | 0.98458 | 0.98125 | 0.60536 | 0.98292
CNN-LSTM | 0.97851 | 0.98038 | 0.97864 | 0.97627 | 0.97977
Table 5. Comparison table of RMSE for different models.

Prediction Model | 1st RMSE | 2nd RMSE | 3rd RMSE | 4th RMSE | 5th RMSE
BO-CNN-LSTM | 9.1884 | 9.4026 | 9.402 | 10.0991 | 8.7302
CNN | 20.1946 | 13.2294 | 14.1397 | 14.7468 | 19.5426
LSTM | 11.5591 | 11.8780 | 11.9558 | 11.8334 | 11.2865
BO-CNN | 17.8457 | 17.0840 | 16.2699 | 16.9477 | 16.7593
BO-LSTM | 9.2663 | 9.2821 | 10.3813 | 44.9242 | 9.8292
CNN-LSTM | 10.7888 | 10.4140 | 10.3374 | 11.9753 | 10.8097
Table 6. Comparison table of MAE for different models.

Prediction Model | 1st MAE | 2nd MAE | 3rd MAE | 4th MAE | 5th MAE
BO-CNN-LSTM | 6.6913 | 6.4968 | 6.4172 | 7.1443 | 6.0783
CNN | 12.1946 | 7.5809 | 8.4401 | 8.1370 | 14.0874
LSTM | 7.7618 | 7.8052 | 8.0349 | 7.9559 | 8.0027
BO-CNN | 10.6663 | 10.5006 | 10.2878 | 10.5647 | 10.5473
BO-LSTM | 6.7844 | 6.5034 | 7.3406 | 25.3151 | 6.7706
CNN-LSTM | 7.3157 | 7.2649 | 6.9989 | 8.4057 | 7.6253
Table 7. Comparison table of R2 for different models.

Prediction Model | 1st R2 | 2nd R2 | 3rd R2 | 4th R2 | 5th R2
BO-CNN-LSTM | 0.98446 | 0.98308 | 0.98233 | 0.98276 | 0.98792
AdaBoost | 0.29627 | 0.31642 | 0.30234 | 0.28625 | 0.33232
Bi-GRU | 0.95705 | 0.97144 | 0.98044 | 0.98022 | 0.95597
Elman | 0.97790 | 0.94705 | 0.97731 | 0.97677 | 0.97841
RF | 0.97605 | 0.97482 | 0.97414 | 0.97197 | 0.97792
Table 8. Comparison table of RMSE for different models.

Prediction Model | 1st RMSE | 2nd RMSE | 3rd RMSE | 4th RMSE | 5th RMSE
BO-CNN-LSTM | 9.1884 | 9.4026 | 9.402 | 10.0991 | 8.7302
AdaBoost | 64.5354 | 60.8061 | 59.8991 | 62.2635 | 60.3625
Bi-GRU | 18.9198 | 13.011 | 10.7581 | 10.8194 | 19.1550
Elman | 11.4369 | 21.0056 | 11.5890 | 11.7249 | 11.3030
RF | 14.1286 | 14.4864 | 14.6811 | 15.2849 | 13.5653
Table 9. Comparison table of MAE for different models.

Prediction Model | 1st MAE | 2nd MAE | 3rd MAE | 4th MAE | 5th MAE
BO-CNN-LSTM | 6.6913 | 6.4968 | 6.4172 | 7.1443 | 6.0783
AdaBoost | 24.629 | 24.4327 | 24.7612 | 25.3124 | 26.2463
Bi-GRU | 13.6444 | 9.2600 | 7.3848 | 7.3668 | 13.8743
Elman | 7.3436 | 13.9767 | 7.4169 | 7.4672 | 7.3029
RF | 9.1312 | 9.5342 | 9.7531 | 9.9943 | 8.7105
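Tables 4 through 9 score each model by the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). A reference sketch of these standard metrics, applied to invented true and predicted voyage durations (not data from the study):

```python
import numpy as np

def r2_score(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

def rmse(y, yhat):
    """Root mean square error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(y - yhat)))

# Invented true vs. predicted voyage durations (hours), for illustration only.
y_true = [96.0, 120.0, 72.0, 150.0]
y_pred = [90.0, 126.0, 75.0, 147.0]
scores = (r2_score(y_true, y_pred), rmse(y_true, y_pred), mae(y_true, y_pred))
```

Higher R2 and lower RMSE/MAE indicate better fit, which is the sense in which BO-CNN-LSTM outperforms the baselines in the tables above.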
Chen, Q.; Yang, Z.; Gao, J.; Lau, Y.-y.; Zhang, P. Prediction of Ship Estimated Time of Arrival Based on BO-CNN-LSTM Model. J. Mar. Sci. Eng. 2026, 14, 694. https://doi.org/10.3390/jmse14080694
