Review

Visual Sensing and Depth Perception for Welding Robots and Their Industrial Applications

by Ji Wang 1,2, Leijun Li 1,* and Peiquan Xu 2,*

1 Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
2 School of Materials Science and Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(24), 9700; https://doi.org/10.3390/s23249700
Submission received: 14 November 2023 / Revised: 30 November 2023 / Accepted: 6 December 2023 / Published: 8 December 2023
(This article belongs to the Special Issue Intelligent Robotics Sensing Control System)

Abstract

With the rapid development of vision sensing, artificial intelligence, and robotics technology, one of the challenges we face is installing more advanced vision sensors on welding robots to achieve intelligent welding manufacturing and obtain high-quality welding components. Depth perception is one of the bottlenecks in the development of welding sensors. This review provides an assessment of active and passive sensing methods for depth perception and classifies and elaborates on the depth perception mechanisms based on monocular vision, binocular vision, and multi-view vision. It explores the principles and means of using deep learning for depth perception in robotic welding processes. Further, the application of welding robot visual perception in different industrial scenarios is summarized. Finally, the problems and countermeasures of welding robot visual perception technology are analyzed, and developments for the future are proposed. This review has analyzed a total of 2662 articles and cited 152 as references. The potential future research topics are suggested to include deep learning for object detection and recognition, transfer deep learning for welding robot adaptation, developing multi-modal sensor fusion, integrating models and hardware, and performing a comprehensive requirement analysis and system evaluation in collaboration with welding experts to design a multi-modal sensor fusion architecture.

1. Introduction

The interaction between cameras and welding lies in the integration of imaging technology, machine vision, and process control for the welding process [1,2]. As we embrace the rapid development of artificial intelligence [3], the prospects for research and development in the automation and intelligence of robotic welding have never been more promising [4,5,6]. Scientists, engineers, and welders have been exploring new methods for automated welding. Over the past few decades, as shown in Figure 1, numerous sensors have been developed for welding, including infrared sensors [7], vision sensors [8,9], temperature sensors [10], acoustic sensors [11], arc sensors [12], and force sensors [13].
The vision sensor stands out as one of the sensors with immense development potential. This device leverages optical principles and employs image processing algorithms to capture images while distinguishing foreground objects from the background. Essentially, it amalgamates the functionalities of a camera with sophisticated image processing algorithms to extract valuable signals from images [14].
Vision sensors find widespread application in industrial automation and robotics, serving various purposes including inspection, measurement, object detection, quality control, and navigation [15]. These versatile tools are employed across industries such as manufacturing, food safety [16], automotives, electronics, pharmaceuticals, logistics, and unmanned aerial vehicles [17]. Their utilization significantly enhances efficiency, accuracy, and productivity by automating visual inspection and control processes.
A vision sensor may also include other features such as lighting systems to enhance image quality, communication interfaces for data exchange, and integration with control systems or robots. It works in a variety of lighting conditions for detecting complex patterns, colors, shapes, and textures. Vision sensors can process visual information in real time, allowing automated systems to make decisions and take actions.
Vision sensors for welding have the characteristics of non-contact measurement, versatility, high precision, and real-time sensing [18], providing powerful information for the automated control of welding [19]. However, extracting depth information is challenging in the application of vision sensors. Depth perception is the ability to perceive the three-dimensional (3D) world through measuring the distance to objects [20,21] by using a visual system [22,23,24] mimicking human stereoscopic vision and the accommodative mechanism of the human eye [25,26,27,28]. Depth perception has a wide range of applications [29,30], such as intelligent robots [31,32], facial recognition [33,34], medical imaging [35], food delivery robots [36], intelligent healthcare [37], autonomous driving [38], virtual reality and augmented reality [39], object detection and tracking [40], human–computer interaction [41], 3D reconstruction [42], and welding robots [43,44,45].
The goal of this review is to summarize and interpret the research in depth perception and its application to welding vision sensors and evaluate some examples of robotic welding based on vision sensors.
Review [46] focuses on structured light sensors for intelligent welding robots. Review [47] focuses on vision-aided robotic welding, including the detection of various groove and joint types using active and passive visual sensing methods. Review [48] focuses on visual perception for different forms of industry intelligence. Review [49] focuses on deep learning methods for vision systems intended for Construction 4.0. In contrast, our review provides a comprehensive analysis of visual sensing and depth perception, covering visual sensor technology, welding robot sensors, computer vision-based depth perception methods, and the industrial applications of perception in welding robots.

2. Research Method

This article focuses on visual sensing and depth perception for welding robots, as well as their industrial applications. We conducted a literature review and evaluated the field from several perspectives, including welding robot sensors, machine vision-based depth perception methods, and the welding robot sensors used in industry.
We searched for relevant literature in the Web of Science database using the search term “Welding Sensors”. A total of 2662 articles were retrieved. As shown in Figure 2, these articles were categorized into subfields, and the numbers of articles in the top 10 fields were plotted. From each subfield, we selected representative articles and reviewed them further. Valuable references from their bibliographies were subsequently collected.
In total, we selected 152 articles as references for this review. Our criterion for literature selection was the quality of the articles, specifically focusing on the following:
  • Relevance to technologies of visual sensors for welding robots.
  • Sensors used in the welding process.
  • Depth perception methods based on computer vision.
  • Welding robot sensors used in industry.

3. Sensors for Welding Process

Figure 3 shows a typical laser vision sensor used for a welding process. If there are changes in the joint positions, the sensors used for searching the welding seam provide real-time information to the robot controller. Commonly used welding sensors include thru-arc seam tracking (TAST) sensors, arc voltage control (AVC) sensors, touch sensors, electromagnetic sensors, ultrasonic sensors, laser vision sensors, etc.

3.1. Thru-Arc Seam Tracking (TAST) Sensors

In 1990, Siores [50] achieved weld seam tracking and the control of weld pool geometry using the arc as a sensor. The signal detection point is the welding arc itself, which eliminates sensor positioning errors and is unaffected by arc spatter, smoke, or arc glare, making it a cost-effective solution. Comprehensive mathematical models [51,52] have been developed and successfully applied to automatic weld seam tracking in arc welding robots and automated welding equipment. Commercial robot companies have equipped their robots with such sensing devices [53].
Arc sensor weld seam tracking utilizes the arc as a sensor to detect changes in the welding current caused by variations in the arc length [54]. The sensing principle is that when the arc position changes, primarily the distance between the welding nozzle and the surface of the workpiece, the electrical parameters of the arc also change. From this, the relative position deviation between the welding gun and the weld seam can be derived from the arc oscillation pattern. In many cases, the typical thru-arc seam tracking (TAST) control method can optimize the weld seam tracking performance by adjusting various control variables.
The main advantage of TAST as a weld seam tracking method is its low cost, as it requires only a welding current sensor as hardware. However, it requires the construction of a weld seam tracking control model, in which the robot adjusts the torch position in response to the welding current feedback.
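To illustrate the TAST feedback principle described above, the following minimal sketch compares the mean welding current sampled at the two extremes of the torch weave and converts the imbalance into a lateral correction; the variable names, gain, and sign convention are assumptions that would need experimental calibration:

```python
import numpy as np

def tast_lateral_correction(current_left, current_right, gain_mm_per_amp=0.05):
    """Estimate a lateral torch correction (mm) from thru-arc sensing.

    current_left / current_right: welding current samples (A) taken at the
    left and right extremes of the torch weave.  If the torch is centered
    on the seam, the average currents at both extremes are equal; an
    imbalance indicates a lateral offset.  The gain (mm per ampere) and the
    sign convention are assumed and must be calibrated experimentally.
    """
    imbalance = float(np.mean(current_right)) - float(np.mean(current_left))
    return gain_mm_per_amp * imbalance

# Synthetic example: the torch has drifted toward the left sidewall,
# so the current sampled at the left extreme is higher.
left_samples = [212.0, 215.0, 213.5]
right_samples = [204.0, 205.5, 203.8]
print(f"lateral correction: {tast_lateral_correction(left_samples, right_samples):+.2f} mm")
```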

3.2. Arc Voltage Control (AVC) Sensors

In gas tungsten arc welding (GTAW), there is a proportional relationship between the arc voltage and the arc length. AVC sensors monitor changes in the arc voltage caused by variations in the arc length and provide feedback to control the torch height [55]. Due to their lower sensitivity to arc length signals, AVC sensors are primarily used for vertical tracking and, less frequently, for horizontal weld seam tracking. The establishment of an AVC sensing model is relatively simple, and it can be used in both pulsed current welding and constant current welding.
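To make the feedback loop concrete, the following minimal sketch implements a simple proportional controller that adjusts the torch height from the measured arc voltage; the setpoint and gain are assumed values, not parameters from the cited work:

```python
def avc_height_correction(v_measured, v_setpoint=12.0, kp=0.4):
    """Proportional arc-voltage control of torch height in GTAW.

    v_measured: sampled arc voltage (V).
    v_setpoint: target arc voltage corresponding to the desired arc length
                (hypothetical value; process dependent).
    kp:         proportional gain in mm per volt (assumed).

    A voltage above the setpoint implies an arc that is too long, so the
    torch is lowered (negative correction), and vice versa.
    """
    return -kp * (v_measured - v_setpoint)

# Example: the arc voltage has risen to 13.1 V, so the torch is lowered.
print(f"height correction: {avc_height_correction(13.1):+.2f} mm")
```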

3.3. Laser Sensors

Due to material or process limitations, certain welding processes, such as thin plate welding, cannot utilize arc sensors for weld seam tracking. Additional sensors on the robotic system are required, and laser sensors are a popular choice.
Laser sensors do not require an arc model and can determine the welding joint position before welding begins. When there are changes in the joint, the robot dynamically adjusts the welding parameters or corrects the welding path deviations in real time [56]. Laser sensor systems are relatively complex and have stringent requirements for the welding environment. Since the laser sensor is installed on the welding torch, it may limit the accessibility of the torch to the welding joint. An associated issue is the offset between the laser sensor’s detection point and the actual welding point, known as sensor positioning lead error.

3.4. Contact Sensing

Contact sensors do not require any weld seam tracking control functions. Instead, they find the weld seam before initiating the arc and continuously adjust the position deviation along the entire path. The robot operates in a search mode, using contact to gather the three-dimensional positional information of the weld seam. The compensation for the detected deviation is then transmitted to the robot controller.
Typical contact-based weld seam tracking sensors rely on probes that roll or slide within the groove to reflect the positional deviation between the welding torch and the weld seam [57]. They utilize microswitches installed within the sensor to determine the polarity of the deviation, enabling weld seam tracking. Contact sensors are suitable for X- and Y-shaped grooves, narrow gap welds, and fillet welds. They are widely used in seam tracking because of their simple system structure, easy operation, low cost, and immunity to arc smoke and spatter. However, they have some drawbacks: different groove types require different probes, and the probes can experience significant wear and deform easily, which makes them unsuitable for high-speed welding processes.

3.5. Ultrasonic Sensing

The detection principle of ultrasonic weld seam tracking sensors is as follows: ultrasonic waves are emitted by the sensor, and when they reach the surface of the welded workpiece, they are reflected and received by the sensor. By calculating the time interval between the emission and reception of the ultrasonic waves, the distance between the sensor and the workpiece can be determined. For weld seam tracking, the edge-finding method is used to detect the left and right edge deviations of the weld seam. Ultrasonic sensing can be applied in welding methods such as GTAW and submerged arc welding (SAW) and enables the automatic recognition of the welding workpiece [58,59]. Ultrasonic sensing offers significant advantages in the field of welding, including non-contact measurement, high precision, real-time monitoring, and wide frequency adaptability. By eliminating interference with the welding workpiece and reducing sensor wear, it ensures the accuracy and consistency of weld joints. Furthermore, ultrasonic sensors enable the prompt detection of issues and defects, empowering operators to take timely actions and ensure welding quality. However, there are limitations to ultrasonic sensing, such as high costs, stringent environmental requirements, material restrictions, near-field detection sensitivity, and operational complexities. Therefore, when implementing ultrasonic sensing, a comprehensive assessment of specific requirements, costs, and technological considerations is essential.
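The echo-timing principle can be expressed in a few lines; the sketch below uses an assumed speed of sound in air and a hypothetical echo time to convert the measured round-trip time into a sensor-to-workpiece distance:

```python
def ultrasonic_distance(echo_time_s, speed_of_sound=343.0):
    """Distance (m) from an ultrasonic round-trip echo time.

    echo_time_s:     time between emission and reception of the pulse (s).
    speed_of_sound:  propagation speed in air at roughly 20 degrees C (m/s);
                     an assumed value that should be compensated for
                     temperature in a real welding cell.
    The wave travels to the workpiece and back, hence the division by 2.
    """
    return speed_of_sound * echo_time_s / 2.0

# Example: a 580 microsecond echo corresponds to roughly 0.1 m stand-off.
print(f"{ultrasonic_distance(580e-6) * 1000:.1f} mm")
```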

3.6. Electromagnetic Sensing

Electromagnetic sensors detect changes in the current induced in their sensing coils, which are caused by variations in the currents induced in the metal near the sensor. This allows the sensor to perceive position deviations of the welding joint. Dual electromagnetic sensors can detect the offset of the weld seam from the center position of the sensor [60,61]. They are particularly suitable for the butt welding of structural profiles, especially for detecting position deviations in welding joints with painted surfaces, markings, and scratches. They can also achieve the automatic recognition of gapless welding joint positions. Kim et al. [62] developed dual electromagnetic sensors for the arc welding of I-shaped butt joints in structural welding and performed weld seam tracking by continuously correcting the offset of the sensor’s position in real time.

3.7. Vision Sensor

Vision sensing systems can be divided into active vision sensors and passive vision sensors according to the imaging light source in the vision system. Passive vision sensors are mainly used for extracting welding pool information, analyzing the transfer of molten droplets, recognizing weld seam shapes, and weld seam tracking. In [63], based on spectral analysis, a passive optical image sensing system with secondary filtering capability was proposed for the intelligent extraction of aluminum alloy welding pool images, and it obtained clear images of aluminum alloy welding pools.
Active vision sensors utilize additional imaging light sources, typically lasers. The principle is to combine a laser diode and a CCD camera into a vision sensor. The red light emitted by the laser diode is reflected from the welding area and enters the CCD camera, and the relative position of the laser beam in the image is used to determine the three-dimensional information of the weld seam [64,65,66]. To prevent interference from the complex spectral composition of the welding arc and to improve imaging quality, lasers of specific wavelengths can be used to isolate the arc light. Depth calculation methods include the Fourier transform, phase measurement, Moiré contouring, and optical triangulation. Essentially, they analyze the spatial light field modulated by the surface of the object to obtain the three-dimensional information of the welded workpiece.
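As a simplified illustration of optical triangulation with a laser stripe (not the exact geometry of any cited sensor), the sketch below assumes the laser light plane is parallel to the camera’s optical axis at a known lateral offset, so depth follows from the pixel displacement of the stripe; the focal length, baseline, and measurements are assumed values:

```python
import numpy as np

def stripe_depth(u_pixels, focal_px=1200.0, baseline_m=0.05):
    """Depth from a laser stripe by simple optical triangulation.

    Assumed geometry: the laser light plane is parallel to the camera's
    optical axis at a lateral offset `baseline_m` (m), so a surface point
    on the stripe appears at horizontal image coordinate u = f * b / Z
    relative to the principal point.  Inverting gives Z = f * b / u.
    `focal_px` and `baseline_m` are assumed calibration values, not
    parameters of any specific commercial sensor.
    """
    u = np.asarray(u_pixels, dtype=float)
    return focal_px * baseline_m / u

# Pixel offsets of the detected stripe (one per image row); smaller
# offsets correspond to points farther from the camera.
offsets = [300.0, 310.0, 295.0]          # hypothetical measurements
print(stripe_depth(offsets))             # depths in metres (~0.19-0.20 m)
```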
Both passive and active vision sensing systems can achieve two-dimensional or three-dimensional vision for welding control. Two-dimensional sensing is mainly used for weld seam shape recognition and monitoring of the welding pool. Three-dimensional sensing can construct models of important depth information for machine vision [67,68].

4. Depth Perception Method Based on Computer Vision

Currently, 3D reconstruction has been widely applied in robotics [69], localization and navigation [70], and industrial manufacturing [71]. Figure 4 illustrates the two categories of methods for depth computation. Traditional 3D reconstruction algorithms are based on multi-view geometry. These algorithms utilize image or video data captured from multiple viewpoints and employ geometric calculations and disparity analysis to reconstruct the geometric shape and depth information of objects in 3D space. Methods based on multi-view geometry typically involve camera calibration, image matching, triangulation, and voxel-filling steps to achieve high-quality 3D reconstructions.
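To illustrate the triangulation step of this pipeline, the sketch below uses OpenCV to triangulate one matched point from two calibrated views; the projection matrices and pixel coordinates are placeholder values rather than data from any cited system:

```python
import numpy as np
import cv2

# Assumed 3x4 projection matrices from a prior calibration step
# (P = K [R | t]); the second camera is translated 0.1 m along x.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matched pixel coordinates of the same weld-seam point in both views
# (hypothetical values; in practice they come from feature matching).
pts1 = np.array([[700.0], [400.0]])
pts2 = np.array([[600.0], [400.0]])

# Linear triangulation returns homogeneous 4D points; dividing by the
# last coordinate gives the Euclidean 3D position in the first camera frame.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print(f"triangulated point (m): {X}")   # approx. [0.06, 0.04, 1.00]
```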
Figure 5 describes the visual perception for welding robots based on deep learning, including 3D reconstruction. Deep learning algorithms leverage convolutional neural networks (CNNs) to tackle the problem of 3D reconstruction. By applying deep learning models to image or video data, these algorithms can acquire the 3D structure and depth information of objects through learning and inference. Through end-to-end training and automatic feature learning, these algorithms can overcome the limitations of traditional approaches and achieve better performance in 3D reconstruction.

4.1. Traditional Methods for 3D Reconstruction Algorithms

Traditional 3D reconstruction algorithms can be classified into two categories according to whether the sensor actively illuminates the objects or not [72]. The active methods emit laser, sound, or electromagnetic waves toward the target objects and receive the reflected waves. The passive methods rely on cameras capturing the reflection of the ambient environment (e.g., natural light) and on specific algorithms to calculate the 3D spatial information of the objects.
In the active methods, the depth information of the objects can be inferred by measuring the changes in the properties of the returned light, sound, or electromagnetic waves. Precise calibration and synchronization of the hardware devices and sensors are required to ensure accuracy and reliability.
In contrast, for the passive methods, the captured images are processed by algorithms to obtain the objects’ 3D spatial information [73,74]. These algorithms typically involve feature extraction, matching, and triangulation to infer the depth and shape information of the objects in the images.

4.1.1. Active Methods

Figure 6 shows schematic diagrams of several active methods. Table 1 summarizes the relevant literature on the active methods.
Table 1. Active approaches in the selected papers.

Year | Method | Description | References
2019 | Structured light | A new active light field depth estimation method is proposed. | [75]
2015 | Structured light | A structured light system for enhancing the surface texture of objects is proposed. | [76]
2021 | Structured light | A global cost minimization framework is proposed for depth estimation using phase light field and re-formatted phase epipolar plane images. | [77]
2024 | Structured light | A novel active stereo depth perception method based on adaptive structured light is proposed. | [78]
2023 | Structured light | A parallel CNN transformer network is proposed to achieve an improved depth estimation for structured light images in complex scenes. | [79]
2022 | Time-of-Flight (TOF) | DELTAR is proposed to enable lightweight Time-of-Flight sensors to measure high-resolution and accurate depth by collaborating with color images. | [80]
2020 | Time-of-Flight (TOF) | Based on the principle and imaging characteristics of TOF cameras, a single pixel is considered as a continuous Gaussian source, and its differential entropy is proposed as an evaluation parameter. | [81]
2014 | Time-of-Flight (TOF) | Time-of-Flight cameras are presented and common acquisition errors are described. | [82]
2003 | Triangulation | A universal framework is proposed based on the principle of triangulation to address various depth recovery problems. | [83]
2021 | Triangulation | Laser power is controlled via triangulation camera in a remote laser welding system. | [84]
2020 | Triangulation | A data acquisition system is assembled based on differential laser triangulation method. | [85]
2017 | Laser scanning | The accuracy of monocular depth estimation is improved by introducing 2D plane observations from the remaining laser rangefinder without any additional cost. | [86]
2021 | Laser scanning | An online melt pool depth estimation technique is developed for the directed energy deposition (DED) process using a coaxial infrared (IR) camera, laser line scanner, and artificial neural network (ANN). | [87]
2018 | Laser scanning | An automatic crack depth measurement method using image processing and laser methods is developed. | [88]
Figure 6. Depth perception based on laser line scanner and coaxial infrared camera for directed energy deposition (DED) process. Additional explanations for the symbols and color fields can be found in [87]. Reprinted with permission from [87].
Structured light—a technique that utilizes a projector to project encoded structured light onto the object being captured, which is then recorded by a camera [75]. This method relies on the differences in the distance and direction between the different regions of the object relative to the camera, resulting in variations in the size and shape of the projected pattern. These variations can be captured by the camera and processed by a computational unit to convert them into depth information, thus acquiring the three-dimensional contour of the object [76]. However, structured light has some drawbacks, such as susceptibility to interference from ambient light, leading to poor performance in outdoor environments. Additionally, as the detection distance increases, the accuracy of structured light decreases. To address these issues, current research efforts have employed strategies such as increasing power and changing coding methods [77,78,79].
Time-of-Flight (TOF)—a method that utilizes continuous light pulses and measures the time or phase difference of the received light to calculate the distance to the target [80,81,82]. However, this method requires highly accurate time measurement modules to achieve sufficient ranging precision, making it relatively expensive. Nevertheless, TOF is able to measure long distances with minimal ambient light interference. Current research efforts are focused on improving the yield and reducing the cost of time measurement modules while improving algorithm performance. The goal is to lower the cost by improving the manufacturing process of the time measurement module and to enhance the ranging performance through algorithm optimization.
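For reference, the sketch below shows the basic TOF distance relations; the modulation frequency and phase value are assumed, and real sensors add calibration and ambiguity handling:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_distance_from_delay(delay_s):
    """Distance from a directly measured round-trip delay: d = c * t / 2."""
    return C * delay_s / 2.0

def tof_distance_from_phase(phase_rad, f_mod_hz=20e6):
    """Distance from the phase shift of a continuous-wave TOF signal.

    d = c * phase / (4 * pi * f_mod); the result is unambiguous only up to
    c / (2 * f_mod), i.e. about 7.5 m at the assumed 20 MHz modulation.
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(f"{tof_distance_from_delay(6.67e-9):.3f} m")        # ~1.000 m
print(f"{tof_distance_from_phase(math.pi / 2):.3f} m")    # ~1.874 m
```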
Triangulation method—a distance measurement technique based on the principles of triangulation. Unlike other methods that require precise sensors, it has a lower overall cost [83,84,85]. At short distances, the triangulation method can provide high accuracy, making it widely used in consumer and commercial products such as robotic vacuum cleaners. However, the measurement error of the triangulation method is related to the measurement distance. As the measurement distance increases, the measurement error also gradually increases. This is inherent to the principles of triangulation and cannot be completely avoided.
Laser scanning method—an active visual 3D reconstruction method that utilizes the interaction between a laser beam emitted by a laser device and the target surface to obtain the object’s three-dimensional information. This method employs laser projection and laser ranging techniques to capture the position of laser points or lines and calculate their three-dimensional coordinates, enabling accurate 3D reconstruction. Laser scanning offers advantages such as high precision, adaptability to different lighting conditions, and real-time data acquisition, making it suitable for the reconstruction of complex shapes and fine details [82]. However, this method involves longer scanning times for large objects, higher equipment costs, and challenges in dealing with transparent, reflective, or multiply scattering surfaces. With further technological advancements, laser scanning holds vast application potential in engineering, architecture, cultural heritage preservation, and other fields, although limitations in time, cost, and adaptability to special surfaces still need to be addressed [86,87,88].

4.1.2. Passive Methods

Figure 7 displays schematic diagrams of several passive methods. Table 2 summarizes relevant literature on passive methods.
Table 2. Passive approaches in the selected papers.

Year | Method | Description | References
2010 | Monocular vision | Photometric stereo | [89]
2004 | Monocular vision | Shape from texture | [90]
2000 | Monocular vision | Shape from shading | [91]
2018 | Monocular vision | Depth from defocus | [92]
2003 | Monocular vision | Concentric mosaics | [93]
2014 | Monocular vision | Bayesian estimation and convex optimization techniques are combined in image processing. | [94]
2020 | Monocular vision | Deep learning-based 3D position estimation | [95]
2023 | Binocular/multi-view vision | Increasing the baseline distance between two cameras to improve the accuracy of a binocular vision system. | [96]
2018 | Multi-view vision | Deep learning-based multi-view stereo | [97]
2020 | Multi-view vision | A new sparse-to-dense coarse-to-fine framework for fast and accurate depth estimation in multi-view stereo (MVS) | [98]
2011 | RGB-D camera-based | Kinect Fusion | [99]
2019 | RGB-D camera-based | ReFusion | [100]
2015 | RGB-D camera-based | Dynamic Fusion | [101]
2017 | RGB-D camera-based | Bundle Fusion | [102]
Figure 7. Passive depth perception methods are presented. (a) shows the method based on monocular vision [95]. (b) depicts the methods based on binocular/multi-view vision [96]. Reprinted with permission from [95,96].
Monocular vision—a visual depth recovery technique that uses a single camera as the capturing device. It is advantageous due to its low cost and ease of deployment. Monocular vision reconstructs the 3D environment using the disparity in a sequence of continuous images. Monocular vision depth recovery techniques include photometric stereo [89], shape from texture [90], shape from shading [91], depth from defocus [92], and concentric mosaics [93]. These methods utilize variations in lighting, texture patterns, brightness gradients, focus information, and concentric mosaics to infer the depth information of objects. To improve the accuracy and stability of depth estimation, some algorithms [94,95] employ depth regularization and convolutional neural networks for monocular depth estimation. However, using monocular vision for depth estimation and 3D reconstruction has inherent challenges. A single image may correspond to multiple real-world physical scenes, making it difficult to estimate depth and achieve 3D reconstruction solely based on monocular vision methods.
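As a concrete, hedged example of CNN-based monocular depth estimation, the sketch below runs the publicly available MiDaS model via torch.hub; this assumes network access to download the weights and the repository’s dependencies (such as timm), the input file name is hypothetical, and the output is relative rather than metric depth:

```python
import cv2
import torch

# Load the small MiDaS model and its matching input transform from torch.hub
# (weights are downloaded on first use).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Hypothetical image of a welding joint captured by the passive camera.
img = cv2.cvtColor(cv2.imread("weld_joint.png"), cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)                     # relative inverse depth
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

relative_depth = prediction.cpu().numpy()
print(relative_depth.shape, relative_depth.min(), relative_depth.max())
```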
Binocular/Multi-view Vision—an advanced technique based on the principles of stereo geometry. It utilizes the images captured by the left and right cameras, after rectification, to find corresponding pixels and recover the 3D structural information of the environment [96]. However, this method faces the challenge of matching the images from the left and right cameras, as inaccurate matching can significantly affect the final imaging results of the algorithm. To improve the accuracy of matching, multi-view vision introduces a configuration of three or more cameras to further enhance the precision of matching [97]. This method has notable disadvantages, including longer computation time and a poorer real-time performance [98].
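For comparison, a minimal stereo sketch using OpenCV’s semi-global block matching is shown below; the rectified image files, matcher parameters, focal length, and baseline are assumed placeholders from a hypothetical calibrated rig:

```python
import cv2
import numpy as np

# Rectified left/right images (hypothetical files); in practice they come
# from a calibrated and rectified stereo rig.
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; these parameter values are typical starting
# points, not settings tuned for any particular welding scene.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * B / d, with an assumed focal length (pixels)
# and baseline (metres) taken from stereo calibration.
f_px, baseline_m = 1000.0, 0.06
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
print("median depth (m):", np.median(depth[valid]))
```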
RGB-D Camera-Based—in recent years, many researchers have focused on utilizing consumer-grade RGB-D cameras for 3D reconstruction. For example, Microsoft’s Kinect V1 and V2 products have made significant contributions in this area. The Kinect Fusion algorithm, proposed by Izadi et al. [99] in 2011, was a milestone in achieving real-time 3D reconstruction with RGB-D cameras. Subsequently, algorithms such as Dynamic Fusion [100], ReFusion [101], and Bundle Fusion [102] have emerged, further advancing the field [103]. These algorithms have provided new directions and methods for using RGB-D cameras.

4.2. Deep Learning-Based 3D Reconstruction Algorithms

In the context of deep learning, image-based 3D reconstruction methods leverage large-scale data to establish prior knowledge and transform the problem of 3D reconstruction into an encoding and decoding problem. With the increasing availability of 3D datasets and improvement in computational power, deep learning 3D reconstruction methods can reconstruct the 3D models of objects from single or multiple 2D images without the need for complex camera calibration. This approach utilizes the powerful representation capabilities and data-driven learning approach of deep learning, bringing significant advancements and new possibilities to the field of image 3D reconstruction. Figure 8 illustrates schematic diagrams of several deep learning-based methods.
In 3D reconstruction, there are primarily four types of data formats: (1) The depth map is a two-dimensional image that records, for each pixel, the distance from the viewpoint to the object. The data are represented as a grayscale image, where darker areas correspond to closer regions. (2) Voxels are the 3D analogue of 2D pixels and represent volume elements in 3D space. Each voxel can contain 3D coordinate information as well as other properties such as color and reflectance intensity. (3) Point clouds are composed of discrete points, where each point carries 3D coordinates and additional information such as color and reflectance intensity. (4) Meshes are surfaces composed of polygons and are used to represent the surface of 3D objects. Mesh models have the advantage of convenient computation and support various geometric operations and transformations.
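To make the connection between formats (1) and (3) concrete, the following minimal sketch back-projects a depth map into a point cloud using the pinhole camera model; the intrinsic parameters are placeholder values and would come from calibration in a real system:

```python
import numpy as np

def depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project a depth map (H x W, metres) into an N x 3 point cloud.

    fx, fy, cx, cy are pinhole intrinsics in pixels; the values here are
    placeholders and must come from calibration in a real system.
    Pixels with zero depth (no measurement) are discarded.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Tiny synthetic example: a 480 x 640 plane one metre from the camera.
depth = np.ones((480, 640), dtype=np.float32)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)   # (307200, 3)
```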
The choice of an appropriate data format depends on the specific requirements and algorithm demands, providing diverse options and application areas in 3D reconstruction. Table 3 summarizes the relevant literature on deep learning-based methods. According to the form of the processed data, we briefly explain three types: (1) voxel-based [104,105,106,107,108], (2) point cloud-based [109,110,111,112,113,114,115], and (3) mesh-based [116,117,118,119,120,121,122].
Figure 8. Deep learning methods based on point clouds [112]. Reprinted with permission from [112].

4.2.1. Voxel-Based 3D Reconstruction

Voxels are an extension of pixels to three-dimensional space and, similar to 2D pixels, voxel representations in 3D space exhibit a regular structure. It has been demonstrated that various neural network architectures commonly used in 2D image analysis can be readily extended to voxel representations. Therefore, when tackling problems related to 3D scene reconstruction and semantic understanding, we can build on experience from pixel-based 2D methods. In this regard, we categorize voxel representations into dense voxel representations, sparse voxel representations, and voxel representations obtained through the conversion of point clouds.
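As a minimal illustration of the third category, converting a point cloud into a dense voxel occupancy grid takes only a few lines of NumPy; the voxel size and grid dimensions below are arbitrary assumptions:

```python
import numpy as np

def voxelize(points, voxel_size=0.01, grid_dim=64):
    """Convert an N x 3 point cloud into a dense binary occupancy grid.

    voxel_size: edge length of one voxel in metres (assumed value).
    grid_dim:   number of voxels per axis; points outside the grid are dropped.
    """
    grid = np.zeros((grid_dim, grid_dim, grid_dim), dtype=np.uint8)
    # Shift the cloud so its minimum corner sits at the grid origin.
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_dim), axis=1)
    i, j, k = idx[inside].T
    grid[i, j, k] = 1
    return grid

# Example: voxelize 10,000 random points spread over a 0.5 m cube.
pts = np.random.rand(10_000, 3) * 0.5
occ = voxelize(pts)
print(occ.sum(), "occupied voxels out of", occ.size)
```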

4.2.2. Point Cloud-Based 3D Reconstruction

Traditional deep learning frameworks are built upon 2D convolutional structures, which efficiently handle regularized data structures with the support of modern parallel computing hardware. However, for images lacking depth information, especially under extreme lighting or specific optical conditions, semantic ambiguity often arises. Extending convolution to 3D data, 3D convolutions have emerged to naturally handle regularized voxel data. However, compared with 2D images, the computational resources required for processing dense voxel representations grow rapidly with resolution. Additionally, 3D structures are sparse, so dense voxel representations waste significant resources, which makes them less suitable for large-scale scene analysis tasks. In contrast, point clouds, as an irregular representation, can straightforwardly and effectively capture sparse 3D structures and play a crucial role in 3D scene understanding tasks. Consequently, point cloud feature extraction has become a vital step in the 3D scene analysis pipeline and has developed rapidly.
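To illustrate what per-point feature extraction can look like, the sketch below shows a PointNet-style encoder (a shared per-point MLP followed by symmetric max pooling); the layer sizes are illustrative and do not reproduce any specific published network:

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style encoder: a shared per-point MLP followed by a
    symmetric max-pooling step that yields an order-invariant global feature.
    Layer sizes are illustrative only."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, xyz):                    # xyz: (B, N, 3)
        x = self.mlp(xyz.transpose(1, 2))      # (B, feat_dim, N) per-point features
        return torch.max(x, dim=2).values      # (B, feat_dim) global feature

points = torch.rand(2, 1024, 3)                # two clouds of 1024 points each
print(TinyPointNet()(points).shape)            # torch.Size([2, 256])
```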

4.2.3. Mesh-Based 3D Reconstruction

Mesh-based 3D reconstruction methods use a mesh structure to describe the geometric shape and topological relationships of objects, enabling accurate modeling. In mesh-based 3D reconstruction, the first step is to acquire the surface point cloud data of the object. Then, through a series of operations, the point cloud data are converted into a mesh representation. These operations include mesh topology construction, vertex position adjustment, and boundary smoothing. Finally, by optimizing and refining the mesh, an accurate and smooth 3D object model can be obtained.
Mesh-based 3D reconstruction methods offer several advantages. The mesh structure preserves the shape details of objects, resulting in higher accuracy in the reconstruction results. The adjacency relationships within the mesh provide rich information for further geometric analysis and processing. Additionally, mesh-based methods can be combined with deep learning techniques such as graph convolutional neural networks, enabling advanced 3D shape analysis and understanding.
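As a hedged sketch of the point-cloud-to-mesh pipeline described above, the fragment below uses the open-source Open3D library for normal estimation and Poisson surface reconstruction; the file names and parameter values are assumptions, not settings from any cited work:

```python
import open3d as o3d

# Load a surface point cloud of the workpiece (hypothetical file name).
pcd = o3d.io.read_point_cloud("workpiece_scan.ply")

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Build a triangle mesh; `depth` controls the octree resolution (assumed value).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Light smoothing before saving the reconstructed surface.
mesh = mesh.filter_smooth_simple(number_of_iterations=2)
o3d.io.write_triangle_mesh("workpiece_mesh.ply", mesh)
```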

5. Robotic Welding Sensors in Industrial Applications

The development of robotic welding sensors has been rapid in recent years, and their application in various industries has become increasingly widespread [123,124,125]. These sensors are designed to detect and measure various parameters such as temperature, pressure, speed, and position, which are crucial for ensuring consistent and high-quality welds. The combination of various sensors enables robotic welding machines to better perceive the welding object and control the robot to reach places that are difficult or dangerous for humans to access. As a result, robotic welding machines have been widely applied in various industries, including shipbuilding, automotive, mechanical manufacturing, aerospace, railroad, nuclear, PCB, construction, and medical equipment, due to their ability to improve the efficiency, accuracy, and safety of the welding process. Table 4 summarizes the typical applications of welding robot vision sensors in different fields.
In the shipbuilding and automotive industries, robotic welding vision sensors play a crucial role in ensuring the quality and accuracy of welding processes [126,127,128,129,130,131,132,133]. These sensors are designed to detect various parameters such as the thickness and shape of steel plates, the position and orientation of car parts, and the consistency of welds. By using robotic welding vision sensors, manufacturers can improve the efficiency and accuracy of their welding processes, reduce the need for manual labor, and ensure that their products meet the required safety and quality standards. Figure 9 shows the application of welding robots in shipyards. Figure 10 shows the application of welding robots in automobile factories.
In other fields, robotic welding vision sensors can easily address complex, difficult-to-reach, and hazardous welding scenarios through visual perception [134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149]. By accurately detecting, recognizing, and modeling the object to be welded, the sensors can comprehensively grasp the structure, spatial relationships, and positioning of the object, facilitating the precise control of the welding torch and ensuring optimal welding results. The versatility of robotic welding vision sensors enables them to adapt to various environmental conditions, such as changing lighting conditions, temperatures, and distances. They can also be integrated with other sensors and systems to enhance their performance and functionality.
The use of robotic welding vision sensors offers several advantages over traditional manual inspection methods. Firstly, they can detect defects and inconsistencies in real time, allowing for immediate corrective action to be taken, which reduces the likelihood of defects and improves the overall quality of the welds. Secondly, they can inspect areas that are difficult or impossible for human inspectors to access, such as the inside of pipes or the underside of car bodies, ensuring that all welds meet the required standards, regardless of their location. Furthermore, robotic welding vision sensors can inspect welds at a faster rate than manual inspection methods, allowing for increased productivity and efficiency [150]. They also reduce the need for manual labor, which can be time-consuming and costly. Additionally, the use of robotic welding vision sensors can help to improve worker safety by reducing the need for workers to work in hazardous environments [151].
We have analyzed the experimental results from the literature in actual work environments. In reference [144], the weighted function of the position error in the image space transitioned from 0 to 1, and after active control, the manipulation error was reduced to less than 2 pixels. Reference [147] utilized tool path adaptation and adaptive strategies in a robotic system to compensate for inaccuracies caused by the welding process. Experiments have demonstrated that robotic systems can operate within a certain range of outward angles, in addition to multiple approach angles of up to 50 degrees. This adaptive technique has enhanced the existing structures and repair technologies through incremental spot welding.
In summary, robotic welding vision sensors play a crucial role in assisting robotic welding systems to accurately detect and recognize the objects to be welded, and then guide the welding process to ensure optimal results. These sensors utilize advanced visual technologies such as cameras, lasers, and computer algorithms to detect and analyze the object’s shape, size, material, and other relevant features. They can be integrated into the robotic welding system in various ways, such as mounting them on the robot’s arm or integrating them into the welding torch itself. The sensors provide real-time information to the robotic system, enabling it to adjust welding parameters such as speed, pressure, and heat input to optimize weld quality and consistency [152]. Customized approaches are crucial when applying welding robots across different industries. The automotive, aerospace, and shipbuilding sectors face unique welding challenges that require tailored solutions. Customized robot designs, specialized parameters, and quality control should be considered to ensure industry-specific needs are met.

6. Existing Issues, Proposed Solutions, and Possible Future Work

Visual perception in welding robots encounters a myriad of challenges, encompassing the variability in object appearance, intricate welding processes, restricted visibility, sensor interference, processing limitations, knowledge gaps, and safety considerations. Overcoming these hurdles requires the implementation of cutting-edge sensing and perception technologies, intricate software algorithms, and meticulous system integration. This section outlines the current issues, potential solutions, and future prospects in the field of welding robotics.
In the exploration of deep learning and convolutional neural networks (CNN) within the realm of robot welding vision systems, it is crucial to recognize the potential of alternative methodologies and assess their suitability in specific contexts. Beyond deep learning, traditional machine learning algorithms can be efficiently deployed in robot welding vision systems. Support vector machines (SVMs) and random forests, for example, emerge as viable choices for defect classification and detection in welding processes. These algorithms typically showcase a lower computational complexity and have the capacity to exhibit commendable performance on specific datasets.
Rule-based systems can serve as cost-effective and interpretable alternatives for certain welding tasks. Leveraging predefined rules and logical reasoning, these systems process image data to make informed decisions. Traditional computer vision techniques, including thresholding, edge detection, and shape analysis, prove useful for the precise detection of weld seam positions and shapes. Besides CNNs, a multitude of classical computer vision techniques can find applications in robot welding vision systems. For instance, template matching can ensure the accurate identification and localization of weld seams, while optical flow methods facilitate motion detection during the welding process. These techniques often require less annotated data and can demonstrate robustness in specific scenarios. Hybrid models that amalgamate the strengths of different methodologies can provide comprehensive solutions. Integrating traditional computer vision techniques with deep learning allows for the utilization of deep learning-derived features for classification or detection tasks. Such hybrid models prove particularly valuable in environments with limited data availability or high interpretability requirements.
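As an example of the classical techniques mentioned above, the following minimal sketch locates a weld seam feature in a camera frame by normalized cross-correlation template matching with OpenCV; the file names are hypothetical and the approach is illustrative rather than a complete seam tracking system:

```python
import cv2

# Grayscale frame from the welding camera and a template of the joint
# appearance (both hypothetical files).
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("seam_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is reasonably robust to moderate
# brightness changes between the template and the live frame.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape
seam_center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
print(f"match score {max_val:.2f}, estimated seam position {seam_center}")
```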
The primary challenges encountered by robotic welding vision systems include the following:
  • Adaptation to changing environmental conditions: robotic welding vision systems often struggle to swiftly adjust to varying lighting, camera angles, and other environmental factors that impact the welding process.
  • Limited detection and recognition capabilities: conventional computer vision techniques used in these systems have restricted abilities to detect and recognize objects, causing errors during welding.
  • Vulnerability to noise and interference: robotic welding vision systems are prone to sensitivity issues concerning noise and interference, stemming from sources such as the welding process, robotic movement, and external factors like dust and smoke.
  • Challenges in depth estimation and 3D reconstruction: variations in material properties and welding techniques contribute to discrepancies in the welding process, leading to difficulties in accurately estimating depth and achieving precise 3D reconstruction.
  • The existing welding setup is intricately interconnected, often space-limited, and the integration of a multimodal sensor fusion system necessitates modifications to accommodate new demands. Effectively handling voluminous data and extracting pertinent information present challenges, requiring preprocessing and fusion algorithms. Integration entails comprehensive system integration and calibration, ensuring seamless hardware and software dialogue for the accuracy and reliability of data.
To tackle these challenges, the following solutions are proposed for consideration:
  • Develop deep learning for object detection and recognition: The integration of deep learning techniques, like convolutional neural networks (CNNs), can significantly enhance the detection and recognition capabilities of robotic welding vision systems. This empowers them to accurately identify objects and adapt to dynamic environmental conditions.
  • Transfer deep learning for welding robot adaptation: leveraging pre-trained deep learning models and customizing them to the specifics of robotic welding enables the vision system to learn and recognize welding-related objects and features, elevating its performance and resilience.
  • Develop multi-modal sensor fusion: The fusion of visual data from cameras with other sensors such as laser radar and ultrasonic sensors creates a more comprehensive understanding of the welding environment. This synthesis improves the accuracy and reliability of the vision system (a simple fusion sketch is given after this list).
  • Integrate models and hardware: Utilizing diverse sensors to gather depth information and integrating this data into a welding-specific model enhances the precision of depth estimation and 3D reconstruction.
  • Perform a comprehensive requirements analysis and system evaluation in collaboration with welding experts to design a multi-modal sensor fusion architecture. Select appropriate algorithms for data extraction and fusion to ensure accurate and reliable results. Conduct data calibration and system integration, including hardware configuration and software interface design. Calibrate the sensors and assess the system performance to ensure stable and reliable welding operations.
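For the multi-modal sensor fusion item above, the following minimal sketch shows one simple way to fuse two co-registered depth maps with per-pixel confidence weighting; the data are synthetic and the weighting scheme is an assumption, not a full probabilistic fusion framework:

```python
import numpy as np

def fuse_depth(depth_laser, depth_stereo, conf_laser, conf_stereo):
    """Confidence-weighted fusion of two co-registered depth maps.

    All inputs are H x W arrays; the confidence maps (0..1) are assumed to
    come from each sensor's own quality measures (e.g., stripe intensity,
    stereo matching cost).  This is a per-pixel weighted average only.
    """
    w_sum = conf_laser + conf_stereo + 1e-6
    return (conf_laser * depth_laser + conf_stereo * depth_stereo) / w_sum

# Synthetic example: the laser channel is trusted more near the seam.
d_laser = np.full((4, 4), 0.30)
d_stereo = np.full((4, 4), 0.32)
c_laser = np.full((4, 4), 0.9)
c_stereo = np.full((4, 4), 0.4)
print(fuse_depth(d_laser, d_stereo, c_laser, c_stereo))
```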
Potential future advancements encompass the following:
  • Enhancing robustness in deep learning models: advancing deep learning models to withstand noise and interference will broaden the operational scope of robotic welding vision systems across diverse environmental conditions.
  • Infusing domain knowledge into deep learning models: integrating welding-specific expertise into deep learning models can elevate their performance and adaptability within robotic welding applications.
  • Real-time processing and feedback: developing mechanisms for real-time processing and feedback empowers robotic welding vision systems to promptly respond to welding environment changes, enhancing weld quality and consistency.
  • Autonomous welding systems: integrating deep learning with robotic welding vision systems paves the way for autonomous welding systems capable of executing complex welding tasks without human intervention.
  • Multi-modal fusion for robotic welding: merging visual and acoustic signals with welding process parameters can provide a comprehensive understanding of the welding process, enabling the robotic welding system to make more precise decisions and improve weld quality.
  • Establishing a welding knowledge base: creating a repository of diverse welding methods and materials enables robotic welding systems to learn and enhance their welding performance and adaptability from this knowledge base.

7. Conclusions

The rapid advancement of sensor intelligence and artificial intelligence has ushered in a new era where emerging technologies like deep learning, computer vision, and large language models are making significant inroads across various industries. Among these cutting-edge innovations, welding robot vision perception stands out as a cross-disciplinary technology, seamlessly blending welding, robotics, sensors, and computer vision. This integration offers fresh avenues for achieving the intelligence of welding robots, propelling this field into the forefront of technological progress.
A welding robot with advanced visual perception should have the following characteristics: accurate positioning and detection capabilities, fast response speed and real-time control, the ability to work in complex scenarios, the ability to cope with different welding materials, and a high degree of human–machine collaboration. Specifically, the visual perception system of the welding robot requires highly accurate image processing and positioning capabilities to detect the position and shape of the welded joint precisely. At the same time, the visual perception system needs fast image processing and analysis capabilities so that it can perceive and judge the welding scene in real time and respond correctly and promptly to abnormal situations. Actual welding is usually carried out in a complex environment with interference factors such as lighting changes, smoke, and sparks, so a visually perceptive welding robot should have a strong ability to adapt to the environment and achieve accurate recognition in complex surroundings. The visual perception system also needs to support multi-material welding and adapt to the welding requirements of different materials. Finally, with the development of smart factories, the visual perception system of welding robots needs to support human–machine interaction and collaboration.
At present, the most commonly used welding robot vision perception solution combines a vision sensor with a deep learning model and uses depth estimation and three-dimensional reconstruction methods to perceive the depth of the welding structure and obtain its three-dimensional information. Deep learning-based approaches typically use models such as convolutional neural networks (CNNs) to learn depth features in images. By training on a large amount of image data, these networks learn the relationship between depth and image features such as parallax, texture, and edges. From the images collected by the vision sensor, the depth estimation model can output the depth information of the corresponding spatial positions. Such a depth model can address the welding robot’s need for accurate spatial positioning, so that its attitude and motion trajectory can be controlled.
In conclusion, in the pursuit of research on robot welding vision systems, a balanced consideration of diverse methodologies is essential, with the selection of appropriate methods based on specific task requirements. While deep learning and CNNs wield immense power, their universal applicability is not guaranteed. Emerging or traditional methods may offer more cost-effective or interpretable solutions. Therefore, a comprehensive understanding of the strengths and limitations of different methodologies is imperative, and a holistic approach should be adopted when considering their applications.

Author Contributions

J.W., L.L. and P.X.: conceptualization, methodology, software, formal analysis, writing—original draft preparation, and visualization; L.L. and P.X.: conceptualization, supervision, writing—review and editing; L.L.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Christensen, N.; Davies, V.L.; Gjermundsen, K. Distribution of Temperature in Arc Welding. Brit. Weld. J. 1965, 12, 54–75. [Google Scholar]
  2. Chin, B.A.; Madsen, N.H.; Goodling, J.S. Infrared Thermography for Sensing the Arc Welding Process. Weld. J. 1983, 62, 227s–234s. [Google Scholar]
  3. Soori, M.; Arezoo, B.; Dastres, R. Artificial Intelligence, Machine Learning and Deep Learning in Advanced Robotics, a Review. Cogn. Robot. 2023, 3, 54–70. [Google Scholar] [CrossRef]
  4. Sun, A.; Kannatey-Asibu, E., Jr.; Gartner, M. Sensor Systems for Real-Time Monitoring of Laser Weld Quality. J. Laser Appl. 1999, 11, 153–168. [Google Scholar] [CrossRef]
  5. Vilkas, E.P. Automation of the Gas Tungsten Arc Welding Process. Weld. J. 1966, 45, 410s–416s. [Google Scholar]
  6. Wang, J.; Huang, L.; Yao, J.; Liu, M.; Du, Y.; Zhao, M.; Su, Y.; Lu, D. Weld Seam Tracking and Detection Robot Based on Artificial Intelligence Technology. Sensors 2023, 23, 6725. [Google Scholar] [CrossRef] [PubMed]
  7. Ramsey, P.W.; Chyle, J.J.; Kuhr, J.N.; Myers, P.S.; Weiss, M.; Groth, W. Infrared Temperature Sensing Systems for Automatic Fusion Welding. Weld. J. 1963, 42, 337–346. [Google Scholar]
  8. Chen, S.B.; Lou, Y.J.; Wu, L.; Zhao, D.B. Intelligent Methodology for Sensing, Modeling and Control of Pulsed GTAW: Part 1-Bead-on-Plate Welding. Weld. J. 2000, 79, 151–263. [Google Scholar]
  9. Kim, E.W.; Allem, C.; Eagar, T.W. Visible Light Emissions during Gas Tungsten Arc Welding and Its Application to Weld Image Improvement. Weld. J. 1987, 66, 369–377. [Google Scholar]
  10. Rai, V.K. Temperature Sensors and Optical Sensors. Appl. Phys. B 2007, 88, 297–303. [Google Scholar] [CrossRef]
  11. Romrell, D. Acoustic Emission Weld Monitoring of Nuclear Components. Weld. J. 1973, 52, 81–87. [Google Scholar]
  12. Li, P.; Zhang, Y.-M. Robust Sensing of Arc Length. IEEE Trans. Instrum. Meas. 2001, 50, 697–704. [Google Scholar] [CrossRef]
  13. Lebosse, C.; Renaud, P.; Bayle, B.; de Mathelin, M. Modeling and Evaluation of Low-Cost Force Sensors. IEEE Trans. Robot. 2011, 27, 815–822. [Google Scholar] [CrossRef]
  14. Kurada, S.; Bradley, C. A Review of Machine Vision Sensors for Tool Condition Monitoring. Comput. Ind. 1997, 34, 55–72. [Google Scholar] [CrossRef]
  15. Braggins, D. Oxford Sensor Technology—A Story of Perseverance. Sens. Rev. 1998, 18, 237–241. [Google Scholar] [CrossRef]
  16. Jia, X.; Ma, P.; Tarwa, K.; Wang, Q. Machine Vision-Based Colorimetric Sensor Systems for Food Applications. J. Agric. Food Res. 2023, 11, 100503. [Google Scholar] [CrossRef]
  17. Arafat, M.Y.; Alam, M.M.; Moh, S. Vision-Based Navigation Techniques for Unmanned Aerial Vehicles: Review and Challenges. Drones 2023, 7, 89. [Google Scholar] [CrossRef]
  18. Kah, P.; Shrestha, M.; Hiltunen, E.; Martikainen, J. Robotic Arc Welding Sensors and Programming in Industrial Applications. Int. J. Mech. Mater. Eng. 2015, 10, 13. [Google Scholar] [CrossRef]
  19. Xu, P.; Xu, G.; Tang, X.; Yao, S. A Visual Seam Tracking System for Robotic Arc Welding. Int. J. Adv. Manuf. Technol. 2008, 37, 70–75. [Google Scholar] [CrossRef]
  20. Walk, R.D.; Gibson, E.J. A Comparative and Analytical Study of Visual Depth Perception. Psychol. Monogr. Gen. Appl. 1961, 75, 1–44. [Google Scholar] [CrossRef]
  21. Julesz, B. Binocular Depth Perception without Familiarity Cues. Science 1964, 145, 356–362. [Google Scholar] [CrossRef] [PubMed]
  22. Julesz, B. Binocular Depth Perception of Computer-Generated Patterns. Bell Syst. Tech. J. 1960, 39, 1125–1162. [Google Scholar] [CrossRef]
  23. Cumming, B.; Parker, A. Responses of Primary Visual Cortical Neurons to Binocular Disparity without Depth Perception. Nature 1997, 389, 280–283. [Google Scholar] [CrossRef] [PubMed]
  24. Tyler, C.W. Depth Perception in Disparity Gratings. Nature 1974, 251, 140–142. [Google Scholar] [CrossRef] [PubMed]
  25. Langlands, N.M.S. Experiments on Binocular Vision. Trans. Opt. Soc. 1926, 28, 45. [Google Scholar] [CrossRef]
  26. Livingstone, M.S.; Hubel, D.H. Psychophysical Evidence for Separate Channels for the Perception of Form, Color, Movement, and Depth. J. Neurosci. 1987, 7, 3416–3468. [Google Scholar] [CrossRef] [PubMed]
  27. Wheatstone, C. XVIII. Contributions to the Physiology of Vision.—Part the First. On Some Remarkable, and Hitherto Unobserved, Phenomena of Binocular Vision. Philos. Trans. R. Soc. Lond. 1838, 128, 371–394. [Google Scholar]
  28. Parker, A.J. Binocular Depth Perception and the Cerebral Cortex. Nat. Rev. Neurosci. 2007, 8, 379–391. [Google Scholar] [CrossRef]
  29. Roberts, L. Machine Perception of Three-Dimensional Solids. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963. [Google Scholar]
  30. Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth Estimation Method for Monocular Camera Defocus Images in Microscopic Scenes. Electronics 2022, 11, 2012. [Google Scholar] [CrossRef]
  31. Luo, R.C.; Kay, M.G. Multisensor Integration and Fusion in Intelligent Systems. IEEE Trans. Syst. Man Cybern. 1989, 19, 901–931. [Google Scholar] [CrossRef]
  32. Attalla, A.; Attalla, O.; Moussa, A.; Shafique, D.; Raean, S.B.; Hegazy, T. Construction Robotics: Review of Intelligent Features. Int. J. Intell. Robot. Appl. 2023, 7, 535–555. [Google Scholar] [CrossRef]
  33. Schlett, T.; Rathgeb, C.; Busch, C. Deep Learning-Based Single Image Face Depth Data Enhancement. Comput. Vis. Image Underst. 2021, 210, 103247. [Google Scholar] [CrossRef]
  34. Bloom, L.C.; Mudd, S.A. Depth of Processing Approach to Face Recognition: A Test of Two Theories. J. Exp. Psychol. Learn. Mem. Cogn. 1991, 17, 556–565. [Google Scholar] [CrossRef]
  35. Abreu de Souza, M.; Alka Cordeiro, D.C.; Oliveira, J.d.; Oliveira, M.F.A.d.; Bonafini, B.L. 3D Multi-Modality Medical Imaging: Combining Anatomical and Infrared Thermal Images for 3D Reconstruction. Sensors 2023, 23, 1610. [Google Scholar] [CrossRef] [PubMed]
  36. Gehrke, S.R.; Phair, C.D.; Russo, B.J.; Smaglik, E.J. Observed Sidewalk Autonomous Delivery Robot Interactions with Pedestrians and Bicyclists. Transp. Res. Interdiscip. Perspect. 2023, 18, 100789. [Google Scholar] [CrossRef]
  37. Yang, Y.; Siau, K.; Xie, W.; Sun, Y. Smart Health: Intelligent Healthcare Systems in the Metaverse, Artificial Intelligence, and Data Science Era. J. Organ. End User Comput. 2022, 34, 1–14. [Google Scholar] [CrossRef]
  38. Singh, A.; Bankiti, V. Surround-View Vision-Based 3d Detection for Autonomous Driving: A Survey. arXiv 2023, arXiv:2302.06650. [Google Scholar]
  39. Korkut, E.H.; Surer, E. Visualization in Virtual Reality: A Systematic Review. Virtual Real. 2023, 27, 1447–1480. [Google Scholar] [CrossRef]
  40. Mirzaei, B.; Nezamabadi-pour, H.; Raoof, A.; Derakhshani, R. Small Object Detection and Tracking: A Comprehensive Review. Sensors 2023, 23, 6887. [Google Scholar] [CrossRef]
  41. Onnasch, L.; Roesler, E. A Taxonomy to Structure and Analyze Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
  42. Haug, K.; Pritschow, G. Robust Laser-Stripe Sensor for Automated Weld-Seam-Tracking in the Shipbuilding Industry. In Proceedings of the IECON ’98, Aachen, Germany, 31 August–4 September 1998; pp. 1236–1241. [Google Scholar]
  43. Zhang, Z.; Chen, S. Real-Time Seam Penetration Identification in Arc Welding Based on Fusion of Sound, Voltage and Spectrum Signals. J. Intell. Manuf. 2017, 28, 207–218. [Google Scholar] [CrossRef]
  44. Wang, B.; Hu, S.J.; Sun, L.; Freiheit, T. Intelligent Welding System Technologies: State-of-the-Art Review and Perspectives. J. Manuf. Syst. 2020, 56, 373–391. [Google Scholar] [CrossRef]
  45. Zhang, K.; Yan, M.; Huang, T.; Zheng, J.; Li, Z. 3D Reconstruction of Complex Spatial Weld Seam for Autonomous Welding by Laser Structured Light Scanning. J. Manuf. Process. 2019, 39, 200–207. [Google Scholar] [CrossRef]
  46. Yang, L.; Liu, Y.; Peng, J. Advances in Techniques of the Structured Light Sensing in Intelligent Welding Robots: A Review. Int. J. Adv. Manuf. Technol. 2020, 110, 1027–1046. [Google Scholar] [CrossRef]
  47. Lei, T.; Rong, Y.; Wang, H.; Huang, Y.; Li, M. A Review of Vision-Aided Robotic Welding. Comput. Ind. 2020, 123, 103326. [Google Scholar] [CrossRef]
  48. Yang, J.; Wang, C.; Jiang, B.; Song, H.; Meng, Q. Visual Perception Enabled Industry Intelligence: State of the Art, Challenges and Prospects. IEEE Trans. Ind. Inform. 2021, 17, 2204–2219. [Google Scholar] [CrossRef]
  49. Ottoni, A.L.C.; Novo, M.S.; Costa, D.B. Deep Learning for Vision Systems in Construction 4.0: A Systematic Review. Signal Image Video Process. 2023, 17, 1821–1829. [Google Scholar] [CrossRef]
  50. Siores, E. Self Tuning Through-the-Arc Sensing for Robotic M.I.G. Welding. In Control 90: The Fourth Conference on Control Engineering; Control Technology for Australian Industry; Preprints of Papers; Institution of Engineers: Barton, ACT, Australia, 1990; pp. 146–149. [Google Scholar]
  51. Fridenfalk, M.; Bolmsjö, G. Design and Validation of a Universal 6D Seam-Tracking System in Robotic Welding Using Arc Sensing. Adv. Robot. 2004, 18, 1–21. [Google Scholar] [CrossRef]
  52. Lu, B. Basics of Welding Automation; Huazhong Institute of Technology Press: Wuhan, China, 1985. [Google Scholar]
  53. Available online: www.abb.com (accessed on 1 November 2023).
  54. Fujimura, H. Joint Tracking Control Sensor of GMAW: Development of Method and Equipment for Position Sensing in Welding with Electric Arc Signals (Report 1). Trans. Jpn. Weld. Soc. 1987, 18, 32–40. [Google Scholar]
  55. Zhu, B.; Xiong, J. Increasing Deposition Height Stability in Robotic GTA Additive Manufacturing Based on Arc Voltage Sensing and Control. Robot. Comput.-Integr. Manuf. 2020, 65, 101977. [Google Scholar] [CrossRef]
56. Mao, Y.; Xu, G. A Real-Time Method for Detecting Weld Deviation of Corrugated Plate Fillet Weld by Laser Vision Sensor. Optik 2022, 260, 168786. [Google Scholar] [CrossRef]
  57. Ushio, M.; Mao, W. Sensors for Arc Welding: Advantages and Limitations. Trans. JWRI. 1994, 23, 135–141. [Google Scholar]
  58. Fenn, R. Ultrasonic Monitoring and Control during Arc Welding. Weld. J. 1985, 9, 18–22. [Google Scholar] [CrossRef]
  59. Graham, G.M. On-Line Laser Ultrasonic for Control of Robotic Welding Quality. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 1995. [Google Scholar]
  60. Abdullah, B.M.; Mason, A.; Al-Shamma’a, A. Defect Detection of the Weld Bead Based on Electromagnetic Sensing. J. Phys. Conf. Ser. 2013, 450, 012039. [Google Scholar] [CrossRef]
  61. You, B.-H.; Kim, J.-W. A Study on an Automatic Seam Tracking System by Using an Electromagnetic Sensor for Sheet Metal Arc Welding of Butt Joints. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2002, 216, 911–920. [Google Scholar] [CrossRef]
  62. Kim, J.-W.; Shin, J.-H. A Study of a Dual-Electromagnetic Sensor System for Weld Seam Tracking of I-Butt Joints. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2003, 217, 1305–1313. [Google Scholar] [CrossRef]
  63. Xu, F.; Xu, Y.; Zhang, H.; Chen, S. Application of Sensing Technology in Intelligent Robotic Arc Welding: A Review. J. Manuf. Process. 2022, 79, 854–880. [Google Scholar] [CrossRef]
  64. Boillot, J.P.; Noruk, J. The Benefits of Laser Vision in Robotic Arc Welding. Weld. J. 2002, 81, 32–34. [Google Scholar]
  65. Zhang, P.; Wang, J.; Zhang, F.; Xu, P.; Li, L.; Li, B. Design and Analysis of Welding Inspection Robot. Sci. Rep. 2022, 12, 22651. [Google Scholar] [CrossRef]
  66. Wexler, M.; Boxtel, J.J.A.v. Depth Perception by the Active Observer. Trends Cogn. Sci. 2005, 9, 431–438. [Google Scholar] [CrossRef]
  67. Wikle, H.C., III; Zee, R.H.; Chin, B.A. A Sensing System for Weld Process Control. J. Mater. Process. Technol. 1999, 89, 254–259. [Google Scholar] [CrossRef]
68. Rout, A.; Deepak, B.B.V.L.; Biswal, B.B. Advances in Weld Seam Tracking Techniques for Robotic Welding: A Review. Robot. Comput.-Integr. Manuf. 2019, 56, 12–37. [Google Scholar] [CrossRef]
  69. Griffin, B.; Florence, V.; Corso, J. Video Object Segmentation-Based Visual Servo Control and Object Depth Estimation on A Mobile Robot. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 4–8 January 2022; pp. 1647–1657. [Google Scholar]
  70. Jiang, S.; Wang, S.; Yi, Z.; Zhang, M.; Lv, X. Autonomous Navigation System of Greenhouse Mobile Robot Based on 3D Lidar and 2D Lidar SLAM. Front. Plant Sci. 2022, 13, 815218. [Google Scholar] [CrossRef] [PubMed]
  71. Nomura, K.; Fukushima, K.; Matsumura, T.; Asai, S. Burn-through Prediction and Weld Depth Estimation by Deep Learning Model Monitoring the Molten Pool in Gas Metal Arc Welding with Gap Fluctuation. J. Manuf. Process. 2021, 61, 590–600. [Google Scholar] [CrossRef]
  72. Garcia, F.; Aouada, D.; Abdella, H.K.; Solignac, T.; Mirbach, B.; Ottersten, B. Depth Enhancement by Fusion for Passive and Active Sensing. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 506–515. [Google Scholar]
  73. Yang, A.; Scott, G.J. Efficient Passive Sensing Monocular Relative Depth Estimation. In Proceedings of the 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 15–17 October 2019; pp. 1–9. [Google Scholar]
  74. Li, Q.; Biswas, M.; Pickering, M.R.; Frater, M.R. Accurate Depth Estimation Using Structured Light and Passive Stereo Disparity Estimation. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 969–972. [Google Scholar]
  75. Cai, Z.; Liu, X.; Pedrini, G.; Osten, W.; Peng, X. Accurate Depth Estimation in Structured Light Fields. Opt. Express. 2019, 27, 13532–13546. [Google Scholar] [CrossRef] [PubMed]
  76. Nguyen, T.T.; Slaughter, D.C.; Max, N.; Maloof, J.N.; Sinha, N. Structured Light-Based 3D Reconstruction System for Plants. Sensors 2015, 15, 18587–18612. [Google Scholar] [CrossRef] [PubMed]
  77. Xiang, S.; Liu, L.; Deng, H.; Wu, J.; Yang, Y.; Yu, L. Fast Depth Estimation with Cost Minimization for Structured Light Field. Opt. Express. 2021, 29, 30077–30093. [Google Scholar] [CrossRef]
  78. Jia, T.; Li, X.; Yang, X.; Lin, S.; Liu, Y.; Chen, D. Adaptive Stereo: Depth Estimation from Adaptive Structured Light. Opt. Laser Technol. 2024, 169, 110076. [Google Scholar] [CrossRef]
  79. Zhu, X.; Han, Z.; Zhang, Z.; Song, L.; Wang, H.; Guo, Q. PCTNet: Depth Estimation from Single Structured Light Image with a Parallel CNN-Transformer Network. Meas. Sci. Technol. 2023, 34, 085402. [Google Scholar] [CrossRef]
  80. Li, Y.; Liu, X.; Dong, W.; Zhou, H.; Bao, H.; Zhang, G.; Zhang, Y.; Cui, Z. DELTAR: Depth Estimation from a Light-Weight ToF Sensor and RGB Image. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 619–636. [Google Scholar]
  81. Fang, Y.; Wang, X.; Sun, Z.; Zhang, K.; Su, B. Study of the Depth Accuracy and Entropy Characteristics of a ToF Camera with Coupled Noise. Opt. Lasers Eng. 2020, 128, 106001. [Google Scholar] [CrossRef]
  82. Alenyà, G.; Foix, S.; Torras, C. ToF Cameras for Active Vision in Robotics. Sens. Actuators Phys. 2014, 218, 10–22. [Google Scholar] [CrossRef]
  83. Davis, J.; Ramamoorthi, R.; Rusinkiewicz, S. Spacetime Stereo: A Unifying Framework for Depth from Triangulation. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; pp. II–359. [Google Scholar]
  84. Kos, M.; Arko, E.; Kosler, H.; Jezeršek, M. Penetration-Depth Control in a Remote Laser-Welding System Based on an Optical Triangulation Loop. Opt. Lasers Eng. 2021, 139, 106464. [Google Scholar] [CrossRef]
  85. Wu, C.; Chen, B.; Ye, C. Detecting Defects on Corrugated Plate Surfaces Using a Differential Laser Triangulation Method. Opt. Lasers Eng. 2020, 129, 106064. [Google Scholar] [CrossRef]
  86. Liao, Y.; Huang, L.; Wang, Y.; Kodagoda, S.; Yu, Y.; Liu, Y. Parse Geometry from a Line: Monocular Depth Estimation with Partial Laser Observation. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5059–5066. [Google Scholar]
  87. Jeon, I.; Yang, L.; Ryu, K.; Sohn, H. Online Melt Pool Depth Estimation during Directed Energy Deposition Using Coaxial Infrared Camera, Laser Line Scanner, and Artificial Neural Network. Addit. Manuf. 2021, 47, 102295. [Google Scholar] [CrossRef]
  88. Shehata, H.M.; Mohamed, Y.S.; Abdellatif, M.; Awad, T.H. Depth Estimation of Steel Cracks Using Laser and Image Processing Techniques. Alex. Eng. J. 2018, 57, 2713–2718. [Google Scholar] [CrossRef]
  89. Vogiatzis, G.; Hernández, C. Practical 3D Reconstruction Based on Photometric Stereo. In Computer Vision: Detection, Recognition and Reconstruction; Cipolla, R., Battiato, S., Farinella, G.M., Eds.; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; pp. 313–345. [Google Scholar]
  90. Yemez, Y.; Schmitt, F. 3D Reconstruction of Real Objects with High Resolution Shape and Texture. Image Vis. Comput. 2004, 22, 1137–1153. [Google Scholar] [CrossRef]
  91. Quartucci Forster, C.H.; Tozzi, C.L. Towards 3D Reconstruction of Endoscope Images Using Shape from Shading. In Proceedings of the 13th Brazilian Symposium on Computer Graphics and Image Processing (Cat. No.PR00878), Gramado, Brazil, 17–20 October 2000; pp. 90–96. [Google Scholar]
  92. Carvalho, M.; Le Saux, B.; Trouve-Peloux, P.; Almansa, A.; Champagnat, F. Deep Depth from Defocus: How Can Defocus Blur Improve 3D Estimation Using Dense Neural Networks? In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  93. Feldmann, I.; Kauff, P.; Eisert, P. Image Cube Trajectory Analysis for 3D Reconstruction of Concentric Mosaics. In Proceedings of the VMV, Munich, Germany, 19–21 November 2003; pp. 569–576. [Google Scholar]
  94. Pizzoli, M.; Forster, C.; Scaramuzza, D. REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 2609–2616. [Google Scholar]
95. Liu, J.; Li, Q.; Cao, R.; Tang, W.; Qiu, G. A Contextual Conditional Random Field Network for Monocular Depth Estimation. Image Vis. Comput. 2020, 98, 103922. [Google Scholar] [CrossRef]
  96. Liu, X.; Yang, L.; Chu, X.; Zhou, L. A Novel Phase Unwrapping Method for Binocular Structured Light 3D Reconstruction Based on Deep Learning. Optik 2023, 279, 170727. [Google Scholar] [CrossRef]
  97. Yao, Y.; Luo, Z.; Li, S.; Fang, T.; Quan, L. MVSNet: Depth Inference for Unstructured Multi-View Stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 767–783. [Google Scholar]
  98. Yu, Z.; Gao, S. Fast-MVSNet: Sparse-to-Dense Multi-View Stereo with Learned Propagation and Gauss-Newton Refinement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 14–19 June 2020; pp. 1949–1958. [Google Scholar]
  99. Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A.; et al. KinectFusion: Real-Time 3D Reconstruction and Interaction Using a Moving Depth Camera. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 16 October 2011; pp. 559–568. [Google Scholar]
  100. Palazzolo, E.; Behley, J.; Lottes, P.; Giguère, P.; Stachniss, C. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7855–7862. [Google Scholar]
  101. Newcombe, R.A.; Fox, D.; Seitz, S.M. DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 8–10 June 2015; pp. 580–587. [Google Scholar]
  102. Dai, A.; Nießner, M.; Zollhöfer, M.; Izadi, S.; Theobalt, C. BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration. ACM Trans. Graph. 2017, 36, 76a. [Google Scholar] [CrossRef]
  103. Zollhöfer, M.; Stotko, P.; Görlitz, A.; Theobalt, C.; Nießner, M.; Klein, R.; Kolb, A. State of the Art on 3D Reconstruction with RGB-D Cameras. Comput. Graph. Forum. 2018, 37, 625–652. [Google Scholar] [CrossRef]
  104. Eigen, D.; Puhrsch, C.; Fergus, R. Depth Map Prediction from a Single Image Using a Multi-Scale Deep Network. Adv. Neural Inf. Process. Syst. 2014, 27, 2366–2374. [Google Scholar]
  105. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 8–10 June 2015; pp. 1912–1920. [Google Scholar]
  106. Choy, C.B.; Xu, D.; Gwak, J.; Chen, K.; Savarese, S. 3D-R2N2: A Unified Approach for Single and Multi-View 3D Object Reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 628–644. [Google Scholar]
  107. Girdhar, R.; Fouhey, D.F.; Rodriguez, M.; Gupta, A. Learning a Predictable and Generative Vector Representation for Objects. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 484–499. [Google Scholar]
  108. Yan, X.; Yang, J.; Yumer, E.; Guo, Y.; Lee, H. Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision. In Proceedings of the NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 1696–1704. [Google Scholar]
  109. Fan, H.; Su, H.; Guibas, L.J. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 22–25 July 2017; pp. 605–613. [Google Scholar]
  110. Lin, C.-H.; Kong, C.; Lucey, S. Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; p. 32. [Google Scholar]
  111. Chen, R.; Han, S.; Xu, J.; Su, H. Point-Based Multi-View Stereo Network. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1538–1547. [Google Scholar]
112. Wang, Y.; Ran, T.; Liang, Y.; Zheng, G. An Attention-Based and Deep Sparse Priori Cascade Multi-View Stereo Network for 3D Reconstruction. Comput. Graph. 2023, 116, 383–392. [Google Scholar] [CrossRef]
  113. Chen, J.; Kira, Z.; Cho, Y.K. Deep Learning Approach to Point Cloud Scene Understanding for Automated Scan to 3D Reconstruction. J. Comput. Civ. Eng. 2019, 33, 04019027. [Google Scholar] [CrossRef]
  114. Mandikal, P.; Navaneet, K.L.; Agarwal, M.; Babu, R.V. 3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image. arXiv 2018, arXiv:1807.07796. [Google Scholar]
  115. Ren, S.; Hou, J.; Chen, X.; He, Y.; Wang, W. GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-Guided Distance Representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 14214–14224. [Google Scholar]
116. Kato, H.; Ushiku, Y.; Harada, T. Neural 3D Mesh Renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3907–3916. [Google Scholar]
  117. Piazza, E.; Romanoni, A.; Matteucci, M. Real-Time CPU-Based Large-Scale Three-Dimensional Mesh Reconstruction. IEEE Robot. Autom. Lett. 2018, 3, 1584–1591. [Google Scholar] [CrossRef]
  118. Pan, J.; Han, X.; Chen, W.; Tang, J.; Jia, K. Deep Mesh Reconstruction from Single RGB Images via Topology Modification Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9964–9973. [Google Scholar]
  119. Choi, H.; Moon, G.; Lee, K.M. Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 769–787. [Google Scholar]
  120. Henderson, P.; Ferrari, V. Learning Single-Image 3D Reconstruction by Generative Modelling of Shape, Pose and Shading. Int. J. Comput. Vis. 2020, 128, 835–854. [Google Scholar] [CrossRef]
  121. Wang, N.; Zhang, Y.; Li, Z.; Fu, Y.; Yu, H.; Liu, W.; Xue, X.; Jiang, Y.-G. Pixel2Mesh: 3D Mesh Model Generation via Image Guided Deformation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3600–3613. [Google Scholar] [CrossRef] [PubMed]
  122. Wei, X.; Chen, Z.; Fu, Y.; Cui, Z.; Zhang, Y. Deep Hybrid Self-Prior for Full 3D Mesh Generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 10 March 2021; pp. 5805–5814. [Google Scholar]
  123. Majeed, T.; Wahid, M.A.; Ali, F. Applications of Robotics in Welding. Int. J. Emerg. Res. Manag. Technol. 2018, 7, 30–36. [Google Scholar] [CrossRef]
  124. Eren, B.; Demir, M.H.; Mistikoglu, S. Recent Developments in Computer Vision and Artificial Intelligence Aided Intelligent Robotic Welding Applications. Int. J. Adv. Manuf. Technol. 2023, 126, 4763–4809. [Google Scholar] [CrossRef]
  125. Lin, T.; Chen, H.B.; Li, W.H.; Chen, S.B. Intelligent Methodology for Sensing, Modeling, and Control of Weld Penetration in Robotic Welding System. Ind. Robot Int. J. 2009, 36, 585–593. [Google Scholar] [CrossRef]
  126. Jones, J.E.; Rhoades, V.L.; Beard, J.; Arner, R.M.; Dydo, J.R.; Fast, K.; Bryant, A.; Gaffney, J.H. Development of a Collaborative Robot (COBOT) for Increased Welding Productivity and Quality in the Shipyard. In Proceedings of the SNAME Maritime Convention, Providence, RI, USA, 4–6 November 2015; p. D011S001R005. [Google Scholar]
  127. Ang, M.H.; Lin, W.; Lim, S. A Walk-through Programmed Robot for Welding in Shipyards. Ind. Robot Int. J. 1999, 26, 377–388. [Google Scholar] [CrossRef]
  128. Ferreira, L.A.; Figueira, Y.L.; Iglesias, I.F.; Souto, M.Á. Offline CAD-Based Robot Programming and Welding Parametrization of a Flexible and Adaptive Robotic Cell Using Enriched CAD/CAM System for Shipbuilding. Procedia Manuf. 2017, 11, 215–223. [Google Scholar] [CrossRef]
  129. Lee, D. Robots in the Shipbuilding Industry. Robot. Comput.-Integr. Manuf. 2014, 30, 442–450. [Google Scholar] [CrossRef]
  130. Pellegrinelli, S.; Pedrocchi, N.; Tosatti, L.M.; Fischer, A.; Tolio, T. Multi-Robot Spot-Welding Cells for Car-Body Assembly: Design and Motion Planning. Robot. Comput.-Integr. Manuf. 2017, 44, 97–116. [Google Scholar] [CrossRef]
  131. Walz, D.; Werz, M.; Weihe, S. A New Concept for Producing High Strength Aluminum Line-Joints in Car Body Assembly by a Robot Guided Friction Stir Welding Gun. In Advances in Automotive Production Technology–Theory and Application: Stuttgart Conference on Automotive Production (SCAP2020); Springer: Berlin/Heidelberg, Germany, 2021; pp. 361–368. [Google Scholar]
  132. Chai, X.; Zhang, N.; He, L.; Li, Q.; Ye, W. Kinematic Sensitivity Analysis and Dimensional Synthesis of a Redundantly Actuated Parallel Robot for Friction Stir Welding. Chin. J. Mech. Eng. 2020, 33, 1. [Google Scholar] [CrossRef]
  133. Liu, Z.; Bu, W.; Tan, J. Motion Navigation for Arc Welding Robots Based on Feature Mapping in a Simulation Environment. Robot. Comput.-Integr. Manuf. 2010, 26, 137–144. [Google Scholar] [CrossRef]
  134. Jin, Z.; Li, H.; Zhang, C.; Wang, Q.; Gao, H. Online Welding Path Detection in Automatic Tube-to-Tubesheet Welding Using Passive Vision. Int. J. Adv. Manuf. Technol. 2017, 90, 3075–3084. [Google Scholar] [CrossRef]
135. Yao, T.; Gai, Y.; Liu, H. Development of a Robot System for Pipe Welding. In Proceedings of the 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, 13–14 March 2010; pp. 1109–1112. [Google Scholar]
  136. Luo, H.; Zhao, F.; Guo, S.; Yu, C.; Liu, G.; Wu, T. Mechanical Performance Research of Friction Stir Welding Robot for Aerospace Applications. Int. J. Adv. Robot. Syst. 2021, 18, 1729881421996543. [Google Scholar] [CrossRef]
  137. Haitao, L.; Tingke, W.; Jia, F.; Fengqun, Z. Analysis of Typical Working Conditions and Experimental Research of Friction Stir Welding Robot for Aerospace Applications. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2021, 235, 1045–1056. [Google Scholar] [CrossRef]
138. Bres, A.; Monsarrat, B.; Dubourg, L.; Birglen, L.; Perron, C.; Jahazi, M.; Baron, L. Simulation of Robotic Friction Stir Welding of Aerospace Components. Ind. Robot Int. J. 2010, 37, 36–50. [Google Scholar] [CrossRef]
  139. Caggiano, A.; Nele, L.; Sarno, E.; Teti, R. 3D Digital Reconfiguration of an Automated Welding System for a Railway Manufacturing Application. Procedia CIRP 2014, 25, 39–45. [Google Scholar] [CrossRef]
  140. Wu, W.; Kong, L.; Liu, W.; Zhang, C. Laser Sensor Weld Beads Recognition and Reconstruction for Rail Weld Beads Grinding Robot. In Proceedings of the 2017 5th International Conference on Mechanical, Automotive and Materials Engineering (CMAME), Guangzhou, China, 1–3 August 2017; pp. 143–148. [Google Scholar]
141. Kochan, A. Automating the Construction of Railway Carriages. Ind. Robot. 2000, 27, 108–110. [Google Scholar] [CrossRef]
  142. Luo, Y.; Tao, J.; Sun, Q.; Deng, Z. A New Underwater Robot for Crack Welding in Nuclear Power Plants. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 77–82. [Google Scholar]
143. French, R.; Marin-Reyes, H.; Benakis, M. Advanced Real-Time Weld Monitoring Evaluation Demonstrated with Comparisons of Manual and Robotic TIG Welding Used in Critical Nuclear Industry Fabrication. In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future: Proceedings of the AHFE 2017 International Conference on Human Aspects of Advanced Manufacturing, Los Angeles, CA, USA, 17–21 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 3–13. [Google Scholar]
  144. Gao, Y.; Lin, J.; Chen, Z.; Fang, M.; Li, X.; Liu, Y.H. Deep-Learning Based Robotic Manipulation of Flexible PCBs. In Proceedings of the 2020 IEEE International Conference on Real-Time Computing and Robotics (IEEE RCAR 2020), Hokkaido, Japan, 28–29 September 2020; IEEE: Washington, DC, USA, 2020; pp. 164–170. [Google Scholar]
  145. Liu, F.; Shang, W.; Chen, X.; Wang, Y.; Kong, X. Using Deep Reinforcement Learning to Guide PCBS Welding Robot to Solve Multi-Objective Optimization Tasks. In Proceedings of the Third International Conference on Advanced Algorithms and Neural Networks (AANN 2023), Qingdao, China, 5–7 May 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12791, pp. 560–566. [Google Scholar]
  146. Nagata, M.; Baba, N.; Tachikawa, H.; Shimizu, I.; Aoki, T. Steel Frame Welding Robot Systems and Their Application at the Construction Site. Comput.-Aided Civ. Infrastruct. Eng. 1997, 12, 15–30. [Google Scholar] [CrossRef]
  147. Heimig, T.; Kerber, E.; Stumm, S.; Mann, S.; Reisgen, U.; Brell-Cokcan, S. Towards Robotic Steel Construction through Adaptive Incremental Point Welding. Constr. Robot. 2020, 4, 49–60. [Google Scholar] [CrossRef]
  148. Prusak, Z.; Tadeusiewicz, R.; Jastrzębski, R.; Jastrzębska, I. Advances and Perspectives in Using Medical Informatics for Steering Surgical Robots in Welding and Training of Welders Applying Long-Distance Communication Links. Weld. Technol. Rev. 2020, 92, 37–49. [Google Scholar] [CrossRef]
  149. Wu, Y.; Yang, M.; Zhang, J. Open-Closed-Loop Iterative Learning Control with the System Correction Term for the Human Soft Tissue Welding Robot in Medicine. Math. Probl. Eng. 2020, 2020, 1–9. [Google Scholar] [CrossRef]
  150. Hatwig, J.; Reinhart, G.; Zaeh, M.F. Automated Task Planning for Industrial Robots and Laser Scanners for Remote Laser Beam Welding and Cutting. Prod. Eng. 2010, 4, 327–332. [Google Scholar] [CrossRef]
  151. Lu, X.; Liu, W.; Wu, Y. Review of Sensors and It’s Applications in the Welding Robot. In Proceedings of the 2014 International Conference on Robotic Welding, Intelligence and Automation (RWIA’2014), Shanghai, China, 25 October 2014; pp. 337–349. [Google Scholar]
  152. Shah, H.N.M.; Sulaiman, M.; Shukor, A.Z.; Jamaluddin, M.H.; Rashid, M.Z.A. A Review Paper on Vision Based Identification, Detection and Tracking of Weld Seams Path in Welding Robot Environment. Mod. Appl. Sci. 2016, 10, 83–89. [Google Scholar] [CrossRef]
Figure 1. A classification of depth perception for welding robots.
Figure 2. Top ten fields and the number of papers in each field. The number of retrieved papers was 2662.
Figure 3. (a) A typical laser vision sensor setup for the arc welding process; (b) a video camera as a vision sensor; (c) a vision sensor with multiple lenses.
Figure 4. A classification of depth computation methods, which can be broadly divided into traditional methods and deep learning methods.
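As a concrete example of the traditional, geometry-based branch of this classification, binocular stereo recovers depth from the disparity between matched points in a rectified image pair. A minimal form of the triangulation relation, assuming both cameras share focal length f and are separated by baseline B, is:

```latex
% Depth from binocular disparity (rectified stereo pair):
%   Z - depth of the scene point
%   f - focal length (in pixels)
%   B - baseline between the two camera centers
%   d - disparity between corresponding image points (in pixels)
Z = \frac{f \, B}{d}
```

Deep learning pipelines typically replace the explicit feature-matching step with learned features or direct regression, but when they predict disparity, the same relation still converts it to metric depth.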
Figure 5. A schematic of the processing sequence of welding robot vision perception. The welding robot acquires welding images from the vision sensor, extracts welding information through a neural network, and then evaluates the results and feeds them back to correct the welding operation and improve accuracy.
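The closed loop in Figure 5 can be summarized, purely as an illustration, by the minimal Python sketch below. The camera, model, and robot objects and every function name are hypothetical placeholders rather than an actual welding-robot API.

```python
# Minimal sketch of the perception-feedback loop in Figure 5.
# All object interfaces (camera.read, model.predict, robot.jog_lateral) are assumed placeholders.
import numpy as np


def estimate_seam_offset(model, frame):
    """Run the trained network on one frame and return the lateral seam deviation in mm."""
    prediction = model.predict(frame[np.newaxis, ...])  # add a batch dimension
    return float(prediction[0])


def correct_torch_position(robot, offset_mm, gain=0.5):
    """Apply a simple proportional correction of the torch position from the measured deviation."""
    robot.jog_lateral(-gain * offset_mm)


def perception_loop(camera, model, robot, n_cycles=100):
    """Acquire an image, estimate the seam deviation, and feed the correction back to the robot."""
    for _ in range(n_cycles):
        frame = camera.read()                        # image acquisition from the vision sensor
        offset = estimate_seam_offset(model, frame)  # neural-network evaluation
        correct_torch_position(robot, offset)        # feedback correction of the welding operation
```

In a real system the proportional gain would be replaced by a properly tuned controller, and the loop would run at the frame rate of the vision sensor.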
Figure 9. A super flexible shipbuilding welding robot unit with 9 degrees of freedom [128]. Reprinted with permission from [128].
Figure 10. Welding robot for automobile door production [133]. Reprinted with permission from [133].
Table 3. Approaches based on deep learning in the selected papers.
Year | Method | Description | References
2014 | Voxel | A supervised coarse-to-fine deep learning framework, consisting of two networks, is proposed for depth estimation. | [104]
2015 | Voxel | A method is proposed to represent geometric 3D shapes as a probabilistic distribution of binary variables on a 3D voxel grid. | [105]
2016 | Voxel | The proposed 3D-R2N2 model uses an Encoder-3DLSTM-Decoder architecture to map 2D images to 3D voxel models, enabling voxel-based single-view and multi-view 3D reconstruction. | [106]
2016 | Voxel | Predicting voxels from 2D images and performing 3D model retrieval become feasible. | [107]
2016 | Voxel | A novel encoder–decoder network is proposed, which incorporates a new projection loss defined by projection transformations. | [108]
2017 | Point cloud | 3D geometric generation networks based on point cloud representations are explored. | [109]
2018 | Point cloud | A novel 3D generative model framework is proposed to effectively generate target shapes as dense point clouds. | [110]
2019 | Point cloud | A novel point cloud-based multi-view stereo network is proposed, which processes the target scene directly as a point cloud, providing a more efficient representation, especially in high-resolution scenarios. | [111]
2023 | Point cloud | An attention-based deep sparse prior cascade multi-view stereo network is proposed for 3D reconstruction. | [112]
2019 | Point cloud | A data-driven deep learning framework is proposed to automatically detect and classify building elements from point cloud scenes obtained by laser scanning. | [113]
2019 | Point cloud | 3D-LMNet is proposed as a latent embedding matching method for 3D reconstruction. | [114]
2023 | Point cloud | A learning-based method called GeoUDF is proposed to address the long-standing and challenging problem of reconstructing discrete surfaces from sparse point clouds. | [115]
2018 | Mesh | 2D supervision is used to perform gradient-based 3D mesh editing operations. | [116]
2018 | Mesh | The state-of-the-art incremental manifold mesh algorithm proposed by Litvinov and Lhuillier is improved and extended by Romanoni and Matteucci. | [117]
2019 | Mesh | A topology modification approach is proposed for single-view mesh reconstruction, which can generate high-quality meshes with complex topological structures from a single genus-zero template mesh. | [118]
2020 | Mesh | Pose2Mesh, a novel system based on graph convolutional neural networks, directly estimates the 3D coordinates of human body mesh vertices from a 2D human pose. | [119]
2020 | Mesh | Different mesh parameterizations are employed to incorporate useful modeling priors such as smoothness or composition from primitives. | [120]
2021 | Mesh | A novel end-to-end deep learning architecture generates 3D shapes from a single color image; it represents the 3D mesh with a graph neural network and produces accurate geometry by progressively deforming an ellipsoid. | [121]
2021 | Mesh | A deep learning method based on network self-priors recovers complete 3D models, consisting of triangulated meshes and texture maps, from colored 3D point clouds. | [122]
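Several of the voxel- and depth-oriented entries in Table 3 share an encoder-decoder structure (e.g., the coarse-to-fine network of [104]). The following is a minimal, illustrative PyTorch sketch of such a single-image depth regressor; the layer sizes are arbitrary and do not reproduce any specific published architecture.

```python
# Illustrative encoder-decoder depth regressor (not a reproduction of any method in Table 3).
import torch
import torch.nn as nn


class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two strided convolutions downsample the RGB image by a factor of 4.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: two transposed convolutions restore the input resolution and
        # predict a single-channel depth map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Example: predict a dense depth map for one 128 x 128 RGB frame.
depth = TinyDepthNet()(torch.randn(1, 3, 128, 128))  # output shape: (1, 1, 128, 128)
```

Training such a regressor would require image-depth pairs and a pixel-wise loss, for instance the scale-invariant error used in [104].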
Table 4. Research on sensor technologies for welding robots in different industrial fields.
Year | Area | Key Technology | Description | References
2015 | Shipyard | Human–robot interaction mobile welding robot | A human–robot interaction mobile welding robot successfully produced welds remotely. | [126]
1999 | Shipyard | Ship welding robot system | A ship welding robot system was developed for welding process technology. | [127]
2017 | Shipyard | Super flexible welding robot | A super flexible welding robot module with 9 degrees of freedom was developed. | [128]
2014 | Shipyard | Welding vehicle and six-axis robotic arm | A new type of welding robot system was developed. | [129]
2017 | Automobile | Multi-robot welding system | An extended formulation of the design and motion planning problems for a multi-robot welding system was proposed. | [130]
2021 | Automobile | Robot-guided friction stir welding gun | A new type of robot-guided friction stir welding gun technology was developed. | [131]
2020 | Automobile | Friction stir welding robot | A redundant 2UPR-2RPU parallel robotic system for friction stir welding was proposed. | [132]
2010 | Automobile | Arc welding robot | A motion navigation method based on feature mapping in a simulated environment was proposed, including initial position guidance and weld seam tracking. | [133]
2017 | Machinery | Visual system calibration program | A calibration program for the vision system was proposed, and the position relationship between the camera and the robot was obtained. | [134]
2010 | Machinery | Robot system for welding seawater desalination pipes | A robotic system for welding and cutting seawater desalination pipes was introduced. | [135]
2021 | Aerospace | Aerospace friction stir welding robot | By analyzing the system composition and configuration of the robot, the loading conditions on the robot arm during welding were simulated, and the results were used for strength and fatigue checks. | [136]
2021 | Aerospace | New type of friction stir welding robot | An iterative closest point algorithm was used to plan the welding trajectory for the most complex petal welding conditions. | [137]
2010 | Aerospace | Industrial robot | Industrial robots were used for the friction stir welding (FSW) of metal structures, with a focus on the assembly of aircraft parts made of aluminum alloy. | [138]
2014 | Railway | Industrial robot | The system was developed and implemented based on a three-axis motion device and a vision system composed of a camera, a laser head, and a band-pass filter. | [139]
2017 | Railway | Rail weld bead grinding robot | A method for measuring and reconstructing a steel rail welding model was proposed. | [140]
2000 | Railway | Industrial robot | Automation of welding production for railroad car bodies was introduced, involving friction stir welding, laser welding, and other advanced welding techniques. | [141]
2018 | Nuclear | New type of underwater welding robot | An underwater robot was developed for welding cracks in nuclear power plants and other underwater scenarios. | [142]
2017 | Nuclear | Robotic TIG welding | Manual and robotic TIG welding used in critical nuclear industry fabrication were compared. | [143]
2020 | PCB | Flexible PCB welding robot | A deep learning-based automatic welding operation scheme for flexible PCBs was proposed. | [144]
2023 | PCB | Soldering robot | An optimized PCB welding sequence proved crucial for improving robot welding speed and safety. | [145]
1997 | Construction | Steel frame structure welding robot | Two welding robot systems were developed to rationalize the welding of steel frame structures. | [146]
2020 | Construction | Steel frame structure welding robot | The adaptive tool path of the robot system enabled welds at complex approach angles, increasing the potential of the process. | [147]
2020 | Medical equipment | Surgical robot performing remote welding | The challenges of using surgical robots equipped with digital cameras for remote welding, especially the difficulty of detecting weld pool boundaries, were described. | [148]
2020 | Medical equipment | Intelligent welding system for human soft tissue | By combining manual welding machines with automatic welding systems, intelligent systems for human soft tissue welding could be developed in medicine. | [149]