A Comparative Study of Machine Learning Models for Predicting Vessel Dwell Time Estimation at a Terminal in the Busan New Port
Abstract
1. Introduction
1.1. Research Background
1.2. Scope of the Study and Research Area
1.3. Literature Review
1.4. Research Objective
1.5. Contributions
- The machine learning-based regression model: a pioneering aspect of this study is the development of a novel machine learning-based regression model, trained on comprehensive historical berth-schedule data spanning 41 months. This extensive dataset served as the foundation for training and rigorously validating the model’s predictive capabilities.
- Enhanced voyage planning and terminal operations: the outcomes of this research benefit both shipping companies and terminal operators. By enabling more effective voyage planning for shipping firms, the study streamlines interactions between vessels and terminals, in line with the concept of the just-in-time arrival policy, and thereby improves overall operational efficiency.
- Efficiency with simple data: this study achieved strong results using a straightforward dataset consisting only of previous berth schedules and vessel particulars. The model’s performance surpasses that of the reference model, underscoring the effectiveness of this approach.
2. Materials and Methods
2.1. Research Flow
2.2. Dataset Configuration
2.2.1. Data Collection
2.2.2. Data Exploration
2.2.3. Splitting Dataset
2.3. Data Preprocessing
2.3.1. Removing Outliers
2.3.2. Feature Engineering
2.3.3. Covariate Shift Detection
2.3.4. Dimensionality Reduction
2.4. Machine-Learning Models for Regression Tasks
2.4.1. AdaBoost Regressor [44]
2.4.2. GradientBoost [45]
2.4.3. LGBM Regressor [46]
2.4.4. XGB Regressor [47]
2.4.5. CatBoost Regressor [48]
2.4.6. Random Forest Regressor [49]
2.5. Machine Learning Models Training
2.5.1. Error Metrics
- Mean Absolute Error (MAE) [50]
- Mean Squared Error (MSE) [50]
- Root Mean Squared Error (RMSE) [50]
- R-squared (R2 score) [50]
- Adjusted R-squared (adjusted R2 score) [50]
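The five metrics listed above can be computed directly from predictions. The sketch below is illustrative only (the function name and sample values are ours, not the authors' code); note that adjusted R2 additionally penalizes the number of input features p:

```python
import numpy as np

def regression_metrics(y_true, y_pred, n_features):
    """Compute MAE, MSE, RMSE, R2, and adjusted R2 for a regression model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)

    mae = np.mean(np.abs(y_true - y_pred))
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)

    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    # Adjusted R2 corrects R2 for model complexity (p = number of features).
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "AdjR2": adj_r2}
```

Because adjusted R2 shrinks as features are added without explanatory gain, it is the stricter of the two R2 variants reported in the results tables.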
2.5.2. Hyperparameter Tuning [51]
2.6. Reference Model
3. Results
3.1. Model Prediction Results
3.2. Hyperparameter Tuning Results
3.3. Reference Model Results and Comparison
4. Discussion
4.1. Result Analysis
- Statistics and visualization approach: the data collection process initially aimed for a larger dataset, on the assumption that more data would yield better results. During the analysis, however, we recognized the significance of trends in container movement and vessel capacity. For example, port calls at the terminal by larger vessels with capacities exceeding 10,000 TEUs increased more than those by smaller vessels, which tended to have shorter stays and carry fewer containers. As terminal efficiency and container handling demands increased [33], more frequent port calls became prevalent. We also employed visualization tools such as Matplotlib and Seaborn, as illustrated in Figure 3, Figure 4, Figure 7, Figure 8 and Figure 9, to explore dataset distributions and trends. This allowed us to filter and assess data based on distribution and trend characteristics.
- Data normalization: scaling data is a crucial step in machine learning to ensure consistent and effective model training. We utilized the standard scaler [54], a data scaling technique, to normalize the input features before training our models. Standard scaling transforms each feature into a mean of zero and a standard deviation of one. This technique benefits algorithms sensitive to feature scaling differences, promoting robustness, faster convergence, and better feature importance selection.
- Feature selection: feature selection played a significant role in refining our models. Features were removed based on their median threshold feature importance value, calculated using the SelectFromModel (SFM) class in the scikit-learn library. This process helped us identify which features were most useful for model training. To assess the impact of this technique, we conducted a training experiment using only the top four features by importance, including “totalLoad”, “workingYear”, “discharging”, and “loading”. The results, as shown in Table 7, revealed that while the validation set results were similar to the initial model, those on the test set significantly differed. This observation suggests that factors influencing vessel dwell time, traditionally defined by various studies [10,11,12,19,20,21,22,24,31], may not be universally applicable. Instead, these factors may vary depending on terminal-specific policies and operational dynamics, highlighting the importance of selecting features that align with the specific terminal’s historical data when estimating vessel dwell times.
- Hyperparameter tuning: to optimize model performance, hyperparameter tuning was conducted using the grid search cross-validation (GridSearchCV) technique. This method systematically explores a predefined hyperparameter grid to identify the optimal combination for each model. The adoption of hyperparameter tuning enhanced model performance and facilitated the identification of the best parameters for each model.
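The normalization step described above can be sketched with scikit-learn's StandardScaler; the feature matrix below is hypothetical (the two columns loosely stand in for loading and discharging quantities), and the key point is that the scaler is fitted on training data only:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix; the numbers are invented for illustration.
X_train = np.array([[139.0, 211.0], [500.0, 480.0], [1200.0, 990.0]])
X_test = np.array([[300.0, 350.0]])

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # fit mean/std on training data only
X_test_std = scaler.transform(X_test)        # reuse the training statistics

# After scaling, each training column has mean 0 and standard deviation 1.
```

Fitting on the training split alone avoids leaking test-set statistics into training, which matters here because the test period (September 2022 onward) follows the training period chronologically.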
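The median-threshold feature selection described above might look like the following sketch. The dataset and estimator here are synthetic stand-ins, not the study's actual berth-schedule features or model; only the SelectFromModel usage with threshold="median" mirrors the text:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

# Synthetic regression data standing in for the berth-schedule features.
X, y = make_regression(n_samples=200, n_features=10, n_informative=4,
                       random_state=0)

estimator = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# threshold="median" keeps features whose importance is at or above the
# median importance across all features, i.e. roughly the top half.
sfm = SelectFromModel(estimator, threshold="median", prefit=True)
X_selected = sfm.transform(X)
```

Inspecting `sfm.get_support()` shows which columns survived; in the study, ranking importances this way is what surfaced "totalLoad", "workingYear", "discharging", and "loading" as the top four features.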
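The grid-search tuning described above can be sketched as follows. The grid is a deliberately scaled-down, illustrative subset of the GradientBoost grid (the full per-model grids are listed in the hyperparameter table), and the data is synthetic:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Reduced illustrative grid; GridSearchCV exhaustively evaluates every
# combination with cross-validation and keeps the best-scoring one.
param_grid = {
    "n_estimators": [10, 50],
    "learning_rate": [0.01, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3, scoring="r2")
search.fit(X, y)
best = search.best_params_  # e.g. the winning (n_estimators, lr, depth) combo
```

The exhaustive sweep is what drives the tuning durations reported in the results (e.g. under a minute for AdaBoost's small grid versus nearly an hour for CatBoost's larger one).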
4.2. Additional Validation by Varying Test Periods
4.3. Limitations of This Study
5. Conclusions
5.1. Summary
5.2. Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Shipping Company Code
Company Code | Full Name |
---|---|
MSC | MEDITERRANEAN SHIPPING COMPANY S. A. (MSC) |
MAE | MAERSK SEALAND |
SKR | SINOKOR MERCHANT MARINE CO., LTD |
ONE | Ocean Network Express (ONE) |
ZIM | ZIM INTEGRATED SHIPPING SERVICES LTD |
HLC | HAPAG-LLOYD AG |
COS | CHINA OCEAN SHIPPING (GROUP) CO. |
COH | COSCO SHIPPING KOREA CO. |
HMM | HMM CO., LTD |
HAS | HEUNG-A SHIPPING CO., LTD |
OOL | ORIENT OVERSEAS CONTAINER LINE (OOL) |
BLA | BEN LINE AGENCY |
APL | AMERICAN PRESIDENT LINES., LTD. |
DJS | DONGJIN SHIPPING CO., LTD |
Appendix B. Correlation Heatmap
References
- Unctad. Review of Maritime Transport 2021; UN: New York, NY, USA, 2021. [Google Scholar]
- Robinson, R. Size of vessels and turnround time: Further evidence from the port of Hong Kong. J. Transp. Econ. Policy 1978, 12, 161–178. [Google Scholar]
- De Langen, P.; Nidjam, M.; Van der Horst, M. New indicators to measure port performance. J. Marit. Res. 2007, 4, 23–36. [Google Scholar]
- Lloyd’s List. One Hundred Container Ports 2023. 2023. Available online: https://lloydslist.maritimeintelligence.informa.com/one-hundred-container-ports-2023 (accessed on 30 August 2023).
- Huynh, N. Analysis of container dwell time on marine terminal throughput and rehandling productivity. J. Int. Logist. Trade 2008, 6, 69–89. [Google Scholar] [CrossRef]
- Hassan, R.; Gurning, R.O.S. Analysis of the container dwell time at container terminal by using simulation modelling. Int. J. Mar. Eng. Innov. Res. 2020, 5, 34–43. [Google Scholar] [CrossRef]
- Kgare, T.; Raballand, G.; Ittmann, H.W. Cargo Dwell Time in Durban: Lessons for Sub-Saharan African Ports; World Bank Policy Research Working Paper 5794; World Bank Group: Washington, DC, USA, 2011. [Google Scholar]
- Kourounioti, I.; Polydoropoulou, A.; Tsiklidis, C. Development of models predicting dwell time of import containers in port container terminals—An Artificial Neural Networks application. Transp. Res. Procedia 2016, 14, 243–252. [Google Scholar] [CrossRef]
- Polydoropoulou, A.; Kourounioti, I. Identification of container dwell time determinants using aggregate data. Int. J. Transp. Econ. 2017, 44, 567–588. [Google Scholar]
- Mapotsi, T.B. Factors Affecting Vessel Turnaround Time at the Port of Richards Bay Dry Bulk Terminal; University of KwaZulu-Natal: Westville, South Africa, 2019. [Google Scholar]
- Nyema, S.M. Factors influencing container terminals efficiency: A case study of Mombasa entry port. Eur. J. Logist. Purch. Supply Chain. Manag. 2014, 2, 39–78. [Google Scholar]
- Rupasinghe, S.; Sigera, I.; Cahoon, S. The Analysis of Vessel Turnaround Time at Port of Colombo; University of Tasmania: Hobart, TAS, Australia, 2015. [Google Scholar]
- Zhang, H.; Kim, K.H. Maximizing the number of dual-cycle operations of quay cranes in container terminals. Comput. Ind. Eng. 2009, 56, 979–992. [Google Scholar] [CrossRef]
- Buhari, S.O.; Ndikom, O.; Nwokedi, T. An assessment of the relationship among cargo-throughput, vessel turnaround time and port-revenue in Nigeria (A study of Lagos port complex). J. Adv. Res. Bus. Manag. Account. 2017, 3, 1–13. [Google Scholar] [CrossRef]
- Zhen, H.; Merk, O.; Zhao, N.; Jing, L.; Xu, M.; Xie, W.; Du, X.; Wang, J. The Competitiveness of Global Port-Cities: The Case of Shanghai, China; OECD: Paris, France, 2013. [Google Scholar]
- Ming, N.S.; Shah, M.Z. Petroleum terminal’s operation processes on vessel turnaround time. In Proceedings of the EASTS International Symposium on Sustainable Transportation, University of Technology Malaysia, 12–13 August 2008; Available online: https://web.itu.edu.tr/~keceli/advancedportmanagement/liquid.pdf (accessed on 30 August 2023).
- Jayaprakash, P.O.; Gunasekaran, K. Measurement of Port Performance Utilising Service Time of Vessels. Int. J. Civ. Eng. Build. Mater. 2012, 2, 9. [Google Scholar]
- Đelović, D.; Mitrović, D.M. Some Considerations on Berth Productivity Referred on Dry Bulk Cargoes In A Multipurpose Seaport. Teh. Vjesn. Tech. Gaz. 2017, 24, 511–519. [Google Scholar]
- Loke, K.B.; Othman, M.R.; Saharuddin, A.H.; Fadzil, M.N. Analysis of variables of vessel calls in a container terminal. Open J. Mar. Sci. 2014, 4, 279. [Google Scholar] [CrossRef]
- Premathilaka, W.H. Determining the Factors Affecting the Turnaround Time of Container Vessels: A Case Study on Port of Colombo; World Maritime University: Malmo, Sweden, 2018. [Google Scholar]
- Siddaramaiah, D.G.; Karnoji, D.S.; Gurudev, V. Factors affecting the Vessel Turnaround time in a Seaport. In Proceedings of the 25th International Conference on Hydraulics, Odisha, India, 26–28 March 2021. [Google Scholar]
- Kokila, A.V.; Abijath, V. Reduction of Turnaround Time for Vessels at Cochin Port Trust. Int. J. Pure Appl. Math. 2017, 117, 917–922. [Google Scholar]
- Smith, D. Big data insights into container vessel dwell times. Transp. Res. Rec. 2021, 2675, 1222–1235. [Google Scholar] [CrossRef]
- Ducruet, C.; Itoh, H. Spatial network analysis of container port operations: The case of ship turnaround times. Netw. Spat. Econ. 2022, 22, 883–902. [Google Scholar] [CrossRef]
- U.S. Department of Transportation, Maritime Administration. Impact of High Oil Prices on Freight Transportation: Modal Shift Potential in Five Corridors—Executive Summary. 2008. Available online: https://www.maritime.dot.gov/sites/marad.dot.gov/files/docs/resources/3761/modalshiftstudy-executivesummary.pdf (accessed on 30 August 2023).
- Moon, D.S.-H.; Woo, J.K. The impact of port operations on efficient ship operation from both economic and environmental perspectives. Marit. Policy Manag. 2014, 41, 444–461. [Google Scholar] [CrossRef]
- GEF-UNDP-IMO GloMEEP Project and members of the GIA. Just in Time Arrival Guide-Barriers and Potential Solutions. 2020. Available online: https://wwwcdn.imo.org/localresources/en/OurWork/PartnershipsProjects/Documents/GIA-just-in-time-hires.pdf (accessed on 30 May 2023).
- DCSA. Just-in-Time Port Call. 2023. Available online: https://dcsa.org/standards/jit-port-call/ (accessed on 30 May 2023).
- Jia, H.; Adland, R.; Prakash, V.; Smith, T. Energy efficiency with the application of Virtual Arrival policy. Transp. Res. Part D Transp. Environ. 2017, 54, 50–60. [Google Scholar] [CrossRef]
- Yoon, J.H.; Kim, D.H.; Yun, S.W.; Kim, H.J.; Kim, S. Enhancing Container Vessel Arrival Time Prediction through Past Voyage Route Modeling: A Case Study of Busan New Port. J. Mar. Sci. Eng. 2023, 11, 1234. [Google Scholar] [CrossRef]
- Mokhtar, K.; Shah, M.Z. A regression model for vessel turnaround time. In Tokyo Academic, Industry & Cultural Integration Tour, 2006; Shibaura Institute of Technology: Tokyo, Japan, 2006. [Google Scholar]
- Son, J.; Kim, D.H.; Yun, S.W.; Kim, H.J.; Kim, S. The development of regional vessel traffic congestion forecasts using hybrid data from an automatic identification system and a port management information system. J. Mar. Sci. Eng. 2022, 10, 1956. [Google Scholar] [CrossRef]
- JOC Group Inc. Berth Productivity: The Trends, Outlook and Market Forces Impacting Ship Turnaround Times; JOC Group Inc.: Newark, NJ, USA, 2017; p. 3. [Google Scholar]
- Dekking, F.M.; Kraaikamp, C.; Lopuhaä, H.P.; Meester, L.E. A Modern Introduction to Probability and Statistics: Understanding Why and How; Springer: Berlin/Heidelberg, Germany, 2005; Volume 488. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015. [Google Scholar]
- Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20150202. [Google Scholar] [CrossRef]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Sánchez-Maroño, N.; Alonso-Betanzos, A.; Tombilla-Sanromán, M. Filter Methods for Feature Selection—A Comparative Study, Proceedings of the Intelligent Data Engineering and Automated Learning—IDEAL 2007, Birmingham, UK, 16–19 December 2007; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef]
- Lal, T.N.; Chapelle, O.; Weston, J.; Elisseeff, A. Embedded Methods. In Feature Extraction: Foundations and Applications; Guyon, I., Ed.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 137–165. [Google Scholar]
- Menze, B.H.; Kelm, B.M.; Masuch, R.; Himmelreich, U.; Bachert, P.; Petrich, W.; Hamprecht, F.A. A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data. BMC Bioinform. 2009, 10, 213. [Google Scholar] [CrossRef] [PubMed]
- Breiman, L. Classification and Regression Trees; Routledge: London, UK, 2017. [Google Scholar]
- Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
- Hastie, T.; Rosset, S.; Zhu, J.; Zou, H. Multi-class adaboost. Stat. Interface 2009, 2, 349–360. [Google Scholar] [CrossRef]
- Friedman, J.H. Stochastic gradient boosting. Comput. Stat. Data Anal. 2002, 38, 367–378. [Google Scholar] [CrossRef]
- Fan, J.; Ma, X.; Wu, L.; Zhang, F.; Yu, X.; Zeng, W. Light Gradient Boosting Machine: An efficient soft computing model for estimating daily reference evapotranspiration with local and external meteorological data. Agric. Water Manag. 2019, 225, 105758. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar] [CrossRef]
- Svetnik, V.; Liaw, A.; Tong, C.; Culberson, J.C.; Sheridan, R.P.; Feuston, B.P. Random forest: A classification and regression tool for compound classification and QSAR modeling. J. Chem. Inf. Comput. Sci. 2003, 43, 1947–1958. [Google Scholar] [CrossRef]
- Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef]
- Falkner, S.; Klein, A.; Hutter, F. BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 25–31 July 2018. [Google Scholar]
- Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B (Methodol.) 1974, 36, 111–133. [Google Scholar] [CrossRef]
- Nishimura, E.; Imai, A.; Papadimitriou, S. Berth allocation planning in the public berth system by genetic algorithms. Eur. J. Oper. Res. 2001, 131, 282–292. [Google Scholar] [CrossRef]
- Quackenbush, J. Microarray data normalization and transformation. Nat. Genet. 2002, 32, 496–501. [Google Scholar] [CrossRef] [PubMed]
Berth | Company | Voyage | Vessel * | Time of Berth | Time of Departure | Loading Qty | Discharging Qty | Shifting Qty |
---|---|---|---|---|---|---|---|---|
T2(P) | BLA | V246005 | V246 | 1 January 2019 06:15:00 | 1 January 2019 16:00:00 | 139 | 211 | 0 |
Dataset | Number of Rows | Proportion | Timespan |
---|---|---|---|
Train (randomly split) | 2653 | 67.68% | January 2019~August 2022 (32 months) |
Validation (randomly split) | 664 | 16.96% | |
Test | 597 | 15.25% | September 2022~June 2023 (9 months) |
Total | 3914 | 100% | 41 months |
Feature | Importance |
---|---|
totalLoad | 8.035243 × 10−1 |
workingYear | 7.270265 × 10−2 |
discharging | 1.888127 × 10−2 |
loading | 1.803106 × 10−2 |
workingMonth | 1.556914 × 10−2 |
shift | 1.272530 × 10−2 |
LOA | 9.228835 × 10−3 |
capacity | 8.053313 × 10−3 |
grossTon | 7.731356 × 10−3 |
shipAge | 6.373691 × 10−3 |
width | 6.099715 × 10−3 |
yearBuilt | 5.785803 × 10−3 |
company_MAE | 2.269882 × 10−3 |
company_MSC | 2.165225 × 10−3 |
berth_T1 (Median) | 2.116896 × 10−3 |
berth_T2 | 1.906887 × 10−3 |
company_COH | 1.737024 × 10−3 |
berth_T3 | 1.385001 × 10−3 |
company_ZIM | 8.983881 × 10−4 |
company_ONE | 7.757470 × 10−4 |
company_SKR | 6.906458 × 10−4 |
company_HLC | 3.933310 × 10−4 |
company_HMM | 2.815237 × 10−4 |
company_COS | 2.066013 × 10−4 |
company_infrequent | 1.809573 × 10−4 |
company_HAS | 1.498511 × 10−4 |
company_OOL | 1.069325 × 10−4 |
company_BLA | 2.766817 × 10−5 |
company_DJS | 9.507312 × 10−7 |
Model | Parameter | Possible Value List |
---|---|---|
AdaBoost | n_estimators | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 150] |
learning_rate | [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1] | 
GradientBoost | n_estimators | [10, 20, 30, 40, 50, 100, 150] |
learning_rate | [0.0001, 0.001, 0.01, 0.1, 1, 10, 100] | |
max_depth | [None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] | |
LGBMRegressor | n_estimators | [50, 100, 150] |
learning_rate | [0.0001, 0.001, 0.01, 0.1, 1, 10, 100] | |
max_depth | [None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] | |
num_leaves | [2, 4, 6, 8, 10, 12, 15, 30, 31] | |
XGBRegressor | n_estimators | [10, 20, 30, 40, 50, 100, 150, 200, 250, 300] |
learning_rate | [0.0001, 0.001, 0.01, 0.1, 1] | |
max_depth | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] | |
booster | [gbtree, gblinear, dart] | |
RandomForest | n_estimators | [50, 100, 150, 200, 250, 300] |
max_depth | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] | |
CatBoostRegressor | iterations | [50, 100, 150, 200, 250, 300] |
learning_rate | [0.0001, 0.001, 0.01] | |
max_depth | [None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] | |
l2_leaf_reg | [0.2, 2, 5, 10, 20] |
Dataset | Model | MSE * | RMSE * | MAE * | R2 Score * | Adjusted R2 * |
---|---|---|---|---|---|---|
Validation | AdaBoost | 94,785.37 | 307.87 | 236.79 | 0.77 | 0.76 |
GradientBoost | 69,828.51 | 264.25 | 182.20 | 0.83 | 0.82 | |
LGBMRegressor | 74,334.73 | 265.02 | 182.47 | 0.83 | 0.82 | |
XGBRegressor | 69,638.51 | 263.89 | 180.76 | 0.83 | 0.82 | |
RandomForest | 73,022.97 | 270.23 | 186.28 | 0.82 | 0.82 | |
CatBoostRegressor | 71,168.17 | 266.77 | 189.83 | 0.82 | 0.82 | |
Test | AdaBoost | 102,045.05 | 319.44 | 248.95 | 0.75 | 0.74 |
GradientBoost | 102,545.59 | 320.23 | 256.62 | 0.75 | 0.74 | |
LGBMRegressor | 104,920.72 | 323.91 | 260.28 | 0.74 | 0.74 | |
XGBRegressor | 101,342.16 | 318.34 | 255.24 | 0.75 | 0.75 | |
RandomForest | 106,842.78 | 326.87 | 259.06 | 0.74 | 0.73 | |
CatBoostRegressor | 94,295.66 | 307.08 | 248.47 | 0.77 | 0.76 |
Model | Parameter | Hyper-Value | R2 Score * | Tuning Duration |
---|---|---|---|---|
AdaBoost | n_estimators | 8 | 0.82 | 0:00:26 |
learning_rate | 0.4 | |||
GradientBoost | n_estimators | 50 | 0.86 | 0:05:22 |
learning_rate | 0.1 | |||
max_depth | 4 | |||
LGBMRegressor | n_estimators | 50 | 0.86 | 0:09:32 |
learning_rate | 0.1 | |||
max_depth | 8 | |||
num_leaves | 15 | |||
XGBRegressor | n_estimators | 50 | 0.86 | 0:19:44 |
learning_rate | 0.1 | |||
max_depth | 4 | |||
booster | gbtree | |||
RandomForest | n_estimators | 250 | 0.85 | 0:01:03 |
max_depth | 7 | |||
CatBoostRegressor | iterations | 300 | 0.85 | 0:52:28 |
learning_rate | 0.01 | |||
max_depth | 9 | |||
l2_leaf_reg | 0.2 |
Dataset (with Four Features) | Model | MSE * | RMSE * | MAE * | R2 Score * | Adjusted R2 * |
---|---|---|---|---|---|---|
Validation | AdaBoost | 92,436.03 | 304.03 | 228.14 | 0.77 | 0.77 |
GradientBoost | 76,503.75 | 276.59 | 193.59 | 0.81 | 0.81 | |
LGBMRegressor | 74,334.73 | 272.64 | 189.74 | 0.82 | 0.82 | |
XGBRegressor | 74,460.39 | 272.87 | 188.41 | 0.82 | 0.82 | |
RandomForest | 76,809.47 | 277.15 | 192.07 | 0.81 | 0.81 | |
CatBoostRegressor | 75,157.26 | 274.15 | 194.83 | 0.81 | 0.81 | |
Test | AdaBoost | 148,320.86 | 385.12 | 285.97 | 0.64 | 0.63 |
GradientBoost | 162,420.81 | 403.01 | 302.61 | 0.60 | 0.60 | |
LGBMRegressor | 159,764.90 | 399.71 | 299.39 | 0.61 | 0.61 | |
XGBRegressor | 163,696.40 | 404.59 | 304.05 | 0.60 | 0.60 | |
RandomForest | 166,919.44 | 408.56 | 307.55 | 0.59 | 0.59 | |
CatBoostRegressor | 157,256.97 | 396.56 | 293.88 | 0.61 | 0.61 |
Model | 8 Week * | 4 Week * | 3 Week * | 2 Week * | 1 Week * | 3 Day * | 2 Day * | 1 Day * |
---|---|---|---|---|---|---|---|---|
AdaBoost | 242.396 | 250.935 | 236.639 | 248.752 | 235.696 | 248.439 | 244.896 | 243.515 | |
GradientBoost | 244.078 | 246.804 | 245.425 | 250.212 | 254.591 | 260.625 | 241.191 | 240.822 | |
LGBMRegressor | 249.272 | 252.415 | 247.249 | 263.452 | 266.728 | 263.309 | 248.555 | 246.057 | |
XGBRegressor | 242.385 | 249.663 | 243.872 | 251.331 | 254.349 | 259.973 | 243.866 | 241.726 | |
RandomForest | 245.394 | 257.632 | 249.070 | 257.939 | 262.667 | 254.654 | 244.358 | 243.338 | |
CatBoostRegressor | 250.829 | 243.101 | 241.473 | 246.606 | 238.345 | 249.675 | 238.043 | 233.003 | |
Reference | 256.294 | 269.475 | 254.000 | 270.398 | 271.590 | 245.833 | 279.521 | 277.598 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yoon, J.-H.; Kim, S.-W.; Jo, J.-S.; Park, J.-M. A Comparative Study of Machine Learning Models for Predicting Vessel Dwell Time Estimation at a Terminal in the Busan New Port. J. Mar. Sci. Eng. 2023, 11, 1846. https://doi.org/10.3390/jmse11101846