Article | Open Access

21 January 2026

Classification of Double-Bottom U-Shaped Weld Joints Using Synthetic Images and Image Splitting †

Department of Marine Design Convergence Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
† This article is a revised and substantially expanded version of a conference presentation/abstract entitled Classification of Ship Double-Bottom Weld Joints Using Synthetic Images. In Proceedings of the 2025 Fall Academic Conference and General Assembly of the Korean Society of Ocean Engineers, Jeju, Republic of Korea, 29–31 October 2025.
This article belongs to the Section Ocean Engineering.

Abstract

The shipbuilding industry relies heavily on welding, which accounts for approximately 70% of the overall production process. However, the recent decline in skilled workers, together with rising labor costs, has accelerated the automation of shipbuilding operations. In particular, welding activities are concentrated in the double-bottom region of ships, where collaborative robots are increasingly being introduced to alleviate workforce shortages. Because these robots must directly recognize U-shaped weld joints, this study proposes an image-based classification system capable of automatically identifying and classifying such joints. In double-bottom structures, U-shaped weld joints can be categorized into 176 types according to combinations of collar plate type, slot, watertight feature, and girder. To distinguish these types, deep learning-based image recognition is employed. To construct a large-scale training dataset, 3D Computer-Aided Design (CAD) models were automatically generated using Open Cascade and subsequently rendered to produce synthetic images. Furthermore, to improve classification performance, the input images were split into left, right, upper, and lower regions for both training and inference, and the class definitions for each region were simplified based on the presence or absence of key features. Consequently, classification accuracy was significantly improved compared with an approach using non-split images.
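
The region-splitting step described in the abstract can be illustrated with a minimal sketch. The following Python snippet is not the authors' code; it simply assumes that the left, right, upper, and lower regions are the four half-images of the input, and the function name `split_into_regions` and file name `joint.png` are hypothetical. The paper's exact cropping scheme may differ.

```python
# Minimal sketch (assumption, not the authors' implementation): split a
# weld-joint image into left, right, upper, and lower regions so that each
# region can be classified separately with a simplified class definition.
from PIL import Image


def split_into_regions(path: str) -> dict[str, Image.Image]:
    """Return the left, right, upper, and lower halves of an image."""
    img = Image.open(path)
    w, h = img.size
    return {
        "left": img.crop((0, 0, w // 2, h)),   # left half
        "right": img.crop((w // 2, 0, w, h)),  # right half
        "upper": img.crop((0, 0, w, h // 2)),  # upper half
        "lower": img.crop((0, h // 2, w, h)),  # lower half
    }


if __name__ == "__main__":
    # "joint.png" is a hypothetical synthetic weld-joint image.
    regions = split_into_regions("joint.png")
    for name, region in regions.items():
        region.save(f"joint_{name}.png")  # one sub-image per region
```

Each saved sub-image would then be fed to its own classifier (or classification head), so that each region only needs to distinguish the presence or absence of the features relevant to it rather than all 176 joint types at once.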
