Article

Cross-Shaped Peg-in-Hole Autonomous Assembly System via BP Neural Network Based on Force/Moment and Visual Information

School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Machines 2024, 12(12), 846; https://doi.org/10.3390/machines12120846
Submission received: 9 October 2024 / Revised: 18 November 2024 / Accepted: 24 November 2024 / Published: 25 November 2024
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

Currently, research on peg-in-hole (PiH) compliant assembly is predominantly limited to circular pegs and holes, with insufficient exploration of various complex-shaped PiH tasks. Furthermore, the degree of freedom for rotation about the axis of a circular peg cannot be constrained after assembly, and few studies have covered the complete process from autonomous hole-searching to insertion. To address these problems, a novel cross-shaped peg and hole design has been devised. The center coordinates of the cross-hole are obtained during the hole-searching process using the three-dimensional reconstruction theory of a binocular stereo vision camera. During the insertion process, 26 contact states of the cross-peg and the cross-hole are classified, and the mapping relationship between the force-moment sensor readings and the relative errors is established based on a backpropagation (BP) neural network, thus completing the task of autonomous PiH assembly. This system avoids hand-guiding and fully realizes the autonomous assembly task from hole-searching to insertion; once the accurate relative pose between the two assembly platforms has been obtained, pegs and holes of other shapes can be substituted for repeated assembly, providing a new and unified solution for complex-shaped PiH assembly.

1. Introduction

As the last process of modern manufacturing, mechanical assembly has a direct impact on product quality. In the era of Industry 5.0 [1,2,3], robots are extensively utilized in modern industrial manufacturing fields, such as those of computer, communication and consumer electronics (3C) [4] manufacturing, automobile assembly [5], and reducer installation [6].
However, most robots are limited to position-controlled operations, such as gripping, placing, and painting [7]. They lack the flexibility and perception required for precision work like peg-in-hole (PiH) assembly [8], welding [9,10], and grinding. Additionally, autonomous planning and decision-making abilities are necessary for such tasks. Therefore, force sensors [11,12] are typically installed at the end of a robot to enable real-time motion adjustment based on force-moment information from the sensors. This technology is known as active flexible assembly technology, and the PiH task represents a typical application of this approach.
The PiH task can be divided into two primary processes: hole-searching and insertion. During the hole-searching phase, conventional techniques such as Archimedean spiral search [13], grid search [14], and random search [15] are commonly employed in blind searches. A blind search guided only by force sensors may require extensive search time and travel distance, which limits its efficiency. As a result, visual servoing is frequently utilized in PiH tasks [16]. Visual servoing can partially address the issue of difficult assembly convergence caused by position errors [17]. However, the introduction of visual servoing increases the amount of information to be processed. Therefore, machine learning is applied in the field of vision-based PiH assembly [18,19].
Chang implemented a dynamic visual servo-based miniature PiH assembly method [20]. The PiH task is the most intricate process in assembly, encompassing several key components such as robotic trajectory planning, peg and hole contact state determination, and pose adjustment. As a result, numerous scholars have integrated machine learning into PiH tasks to emulate human operations through reinforcement learning [21,22], convolutional neural networks [23], piecewise strategies, hierarchical learning [24] and decoupling control [25].
However, the current PiH assembly tasks are predominantly limited to circular peg and hole assembly, with research on shaped-PiH assembly still in its infancy. Song carried out complex-shaped PiH assembly under the teaching mode of a robot [26], while Park experimented with circular, triangular, and square PiH assembly using a manipulator [27]. However, they all used the blind searching method, which is less efficient. Li employed a multi-stage search method to perform PiH assembly with irregular shapes under partial constraints; yet, this approach failed to achieve the precise relative pose of the two assembly platforms and lacks universality [28].
To avoid requiring an excessive number of training samples, whose collection can damage the device, Spector transformed the insertion task into a regression problem [29], combining vision and force perception to complete the assembly of 16 kinds of sockets. Ahn completed the PiH assembly task for square, star, and double-cylinder shapes through blind search using deep reinforcement learning algorithms [16]. That study used hand-guiding to initially locate the assembly surface, so that the robotic arm only had to perform the insertion task. However, this design overlooks a crucial aspect: the most difficult part of automated PiH assembly is enabling the robotic arm to find the location of the hole with its eyes (visual servoing) and, like a human, to use its hands (force sensing) to adjust the peg's angle, position, and speed for smooth insertion into the hole. If human intervention is used to guide the peg near the hole, the autonomy of the assembly system is compromised and the subsequent insertion difficulty is greatly reduced; such an assembly strategy goes against the original purpose of automated assembly.
The aim of this paper is to devise a novel and versatile autonomous assembly mechanism for PiH tasks, as obtaining the pose presents a significant challenge in PiH assembly. Once the precise relative pose between the two assembly platforms is obtained, the robotic arm can easily replicate the PiH operation. Due to the ability of a circular peg to rotate around its axis even after assembly, achieving precise positional alignment between the working end and the fixed end of the robot is unattainable. Therefore, it is necessary to design a peg that constrains circumferential rotation.
The new assembly strategy allows for the replacement of pegs and holes of any shape once cross-shaped peg-in-hole assembly has been achieved, without the need to reformulate the assembly strategy or design specialized sensors. Only calibration of the robotic arm and the sensor is required to reproduce the research content presented in this paper and obtain the relative pose between the peg and the hole. This feature allows for the replacement of the shapes and sizes of the pegs and holes while maintaining proper functionality, thereby reducing the workload of users and improving industrial production efficiency.

2. Binocular Stereo Vision Based Hole-Searching

Responding to the current problems in PiH assembly, a novel assembly strategy is proposed that utilizes a binocular camera to determine the center coordinates of the cross-hole [30,31]. This enables the robot to autonomously locate the hole without requiring manual guidance. This autonomous assembly system consists of a 6R robot, a force-moment sensor, a cross-shaped feedback peg, a cross-shaped hole, and a binocular camera, as shown in Figure 1; {J} is the cross-peg coordinate system, {K} is the sensor coordinate system, {S} is the robot base coordinate system, {C1} is the left camera coordinate system, {C2} is the right camera coordinate system, {D} is the cross-hole coordinate system, and {W} is the actual world coordinate system.
The cross-sections of the cross-peg and cross-hole are cross-shaped, which effectively constrains the rotational degrees of freedom (DoF) about the x, y, and z axes and the translational DoF along the x and y axes once assembly is completed. This constrains five of the six DoF of the robot's end joint, addressing the issue that the rotational DoF about the z-axis cannot be constrained by a circular peg. Meanwhile, the cross-peg features a tapered end, and the front of the cross-hole is chamfered. The tapered design relaxes the accuracy required of the initial positioning by the binocular camera and mitigates the convergence failures to which circular-peg assembly is prone.
The 6R robot and the binocular camera are fixed on the same platform, with a force-moment sensor and a cross-shaped feedback peg connected in series at the end of the robot, and the cross-hole is fixed on another assembly platform. A binocular stereo vision system was constructed using two monocular industrial cameras, specifically the JAI Go-X model. The binocular vision camera can capture the coordinates of the cross-hole center, and the force-moment sensor can collect the force/moment information during the assembly process, which can be combined with machine learning to train the robot to complete the autonomous PiH task. Based on the required arm reach and the workload of the robot, the ROKAE XB4 series industrial robot was selected as the 6R robot for this research project. This particular model features a payload capacity of 4 kg, an arm reach of 596 mm, and a positioning repeatability of ±0.02 mm.
In this study, the Robot Operating System (ROS) was selected as the development platform due to its capability of achieving 3D visualization and motion simulation. By integrating MATLAB for data processing and backpropagation (BP) neural network training, the ROS system can efficiently accomplish autonomous assembly tasks. The PiH assembly flowchart is shown in Figure 2. This flowchart shows that the autonomous assembly system is divided into two processes: hole-searching and insertion. In the hole-searching process, the binocular camera image processing and the calculation of the center coordinate of the cross-hole are mainly carried out. In the insertion process, the 26 contact states are judged based on force-moment sensor values, and then the pose error is calculated using the BP neural network. This iterative process continues until PiH assembly is completed.

2.1. Feature Extraction for the Cross-Hole

To calculate the pose of the cross-hole coordinate system relative to the robot base coordinate system {S}, it is necessary to perform image processing first in order to identify the cross-hole features. Then, based on the principle of 3D reconstruction by binocular vision, the spatial coordinates of the center of the cross-hole are obtained. The image of the cross-hole captured by the binocular stereo vision camera is shown in Figure 3a. After obtaining the image, the method of gray-scale classification [32,33] is used to extract features of the image. Figure 3b displays the image after undergoing gray-scale processing.
The pixels of the grayscale image are binarized into black and white based on a predetermined threshold value t. The pixel type is set to 0 when the grayscale value of the pixel is less than t, and to 1 when the grayscale value is greater than or equal to t:
$$c(u,v)=\begin{cases}0, & I(u,v)<t\\ 1, & I(u,v)\geq t\end{cases},\qquad (u,v)\in I$$
where c is the pixel type, represented by an integer; I is the grayscale value of each pixel in the grayscale image, in the range 0 to 1; and t is the threshold value. After binarization, a value of 1 represents the background and is displayed in white, while a value of 0 represents the target and is displayed in black.
Figure 3c shows the image after the binarization process. Since the cross-hole is a through hole, the cross-hole is shown as a white cross shape surrounded by black annulus in the binarized image.
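A minimal MATLAB sketch of this thresholding step is given below (illustrative only; the image file name and the threshold value are placeholders rather than the values used in the actual system):

```matlab
% Read the camera image of the cross-hole and convert it to grayscale.
rgb = imread('cross_hole_left.png');   % placeholder file name
I   = im2double(rgb2gray(rgb));        % grayscale values in [0, 1]

% Binarize according to Equation (1): 0 below the threshold, 1 otherwise.
t = 0.5;                               % placeholder threshold value
c = ones(size(I));
c(I < t) = 0;

imshow(c);                             % white = background, black = target
```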

2.2. Calculation of Three-Dimensional Center Coordinates of the Cross-Hole

The monocular camera solely captures the planar information of an object while losing its depth perception. Therefore, a parallel binocular stereo vision camera is employed to simultaneously gather information from two views and the parallax ranging method is utilized for 3D reconstruction [34] of spatial points, ultimately obtaining the 3D coordinates of the cross-hole’s center [35].
As illustrated in Figure 4, the operational principle of binocular stereo vision [36] involves two camera coordinate systems: {C1} for the left camera with O C 1 X C 1 Y C 1 Z C 1 , and {C2} for the right camera with O C 2 X C 2 Y C 2 Z C 2 . Both cameras have a focal length f and parallel optical axes separated by a fixed distance d. The binocular cameras utilized in this study have a focal length of 381.2612 mm.
Point P is projected onto the left camera at P 1 ( u 1 ,   v 1 ) and onto the right camera at P 2 ( u 2 ,   v 2 ) , then the point P is located at the intersection of the line O C 1 P 1 and the line O C 2 P 2 . The coordinates of the point P in the coordinate system {C1} are ( x C 1 ,   y C 1 ,   z C 1 ) and in the coordinate system {C2} are ( x C 2 ,   y C 2 ,   z C 2 ) ; then, the relationship between the two coordinate systems {C1} and {C2} is
$$\begin{cases}x_{C1}=x_{C2}+d\\ y_{C1}=y_{C2}\\ z_{C1}=z_{C2}\end{cases}$$
The coordinates of imaging points P 1 and P 2 in the image coordinate system ( x 1 ,   y 1 ) and ( x 2 ,   y 2 ) can be obtained according to Equation (2), and the relationship between the image coordinate system and the camera coordinate system can be determined by applying the principle of similar triangles:
$$\frac{f}{z_{C1}}=\frac{x_1}{x_{C1}}=\frac{y_1}{y_{C1}},\qquad \frac{f}{z_{C2}}=\frac{x_2}{x_{C2}}=\frac{y_2}{y_{C2}}$$
Simultaneously with Equations (2) and (3), the spatial coordinate ( x C 1 ,   y C 1 ,   z C 1 ) of the point P in the left camera coordinate system {C1} is obtained as
$$\begin{cases}x_{C1}=\dfrac{x_1}{f}\,z_{C1}=\dfrac{x_1}{x_1-x_2}\,d\\[1.2ex] y_{C1}=\dfrac{y_1}{f}\,z_{C1}=\dfrac{y_1}{x_1-x_2}\,d\\[1.2ex] z_{C1}=\dfrac{f\,d}{x_1-x_2}\end{cases}$$
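Equation (4) can be wrapped in a short MATLAB helper (a sketch under the parallel-optical-axis assumption; the image coordinates x1, y1, x2 are taken relative to the principal point, and f and d must be expressed in consistent units):

```matlab
function P_C1 = reconstruct_point(x1, y1, x2, f, d)
% 3D reconstruction of a point in the left camera frame {C1} from a
% parallel binocular pair, following Equation (4).
%   x1, y1 : image coordinates of the point in the left camera
%   x2     : horizontal image coordinate of the point in the right camera
%   f      : focal length; d : baseline between the two optical axes
    disparity = x1 - x2;           % horizontal disparity
    zC1 = f * d / disparity;       % depth along the optical axis
    xC1 = x1 * d / disparity;      % = (x1 / f) * zC1
    yC1 = y1 * d / disparity;      % = (y1 / f) * zC1
    P_C1 = [xC1; yC1; zC1];
end
```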
A schematic diagram of the coordinate and relative pose of the binocular vision system is depicted in Figure 5. The initial pose T 1 of the cross-hole coordinate system can be derived from the poses T C 1 S and T D C 1 , which respectively represent the left camera coordinate system {C1}, with respect to the base coordinate system {S}, and the cross-hole coordinate system {D}, with respect to the left camera coordinate system {C1}.
The poses T C 1 S of the left camera coordinate system {C1} with respect to the robot base coordinate system {S} are
$$\boldsymbol{T}_{C1}^{S}=\begin{bmatrix}0&0&1&p_x\\ 1&0&0&p_y\\ 0&1&0&p_z\\ 0&0&0&1\end{bmatrix}$$
where p x , p y , p z is the position of the camera coordinate system {C1} relative to the base coordinate system {S}.
The pose T D C 1 of the cross hole coordinate system {D} with respect to the left camera coordinate system {C1} is
$$\boldsymbol{T}_{D}^{C1}=\begin{bmatrix}0&1&0&x_{C1}\\ 1&0&0&y_{C1}\\ 0&0&1&z_{C1}\\ 0&0&0&1\end{bmatrix}$$
where x C 1 , y C 1 , z C 1 is the position of the cross-hole center in the left camera coordinate system {C1}.
Then the pose T 1 of the coordinate system {D} under the coordinate system {S} of the robot arm base is
$$\boldsymbol{T}_1=\boldsymbol{T}_{C1}^{S}\,\boldsymbol{T}_{D}^{C1}=\begin{bmatrix}0&0&1&p_x+z_{C1}\\ 0&1&0&p_y-x_{C1}\\ 1&0&0&p_z-y_{C1}\\ 0&0&0&1\end{bmatrix}$$
Using the 3D reconstruction principle of binocular vision, the three-dimensional coordinates ( x C 1 ,   y C 1 ,   z C 1 ) of the center of the cross-hole in the left camera coordinate system {C1} are calculated as (0.0507, 0.0085, 0.3002), in meters. The transformation matrix T C 1 S of the left camera coordinate system {C1} with respect to the base coordinate system {S} can be determined as
$$\boldsymbol{T}_{C1}^{S}=\begin{bmatrix}0&0&1&0.1310\\ 1&0&0&0.2406\\ 0&1&0&0.2225\\ 0&0&0&1\end{bmatrix}$$
Substituting (8) into (7), the target pose T 1 of the cross-peg is obtained as
$$\boldsymbol{T}_1=\begin{bmatrix}0&0&1&0.4312\\ 1&0&0&0.2913\\ 0&1&0&0.2140\\ 0&0&0&1\end{bmatrix}$$
The images captured by the binocular camera in the simulation system are presented in Figure 6a,b, while Figure 6c,d displays the images obtained after image processing. The area enclosed by the red rectangle represents the cross-hole region, with its center marked by a red dot. In the left camera, the pixel coordinates of this center are (385, 253), while in the right camera they are (321.5, 252.5). The distance between the two cameras' optical axes is 0.05 m.
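As a quick numerical check of the depth term in Equation (4), using the reported center pixels and baseline, and treating the focal length as expressed in pixel units so that it cancels against the pixel disparity:

$$z_{C1}=\frac{f\,d}{x_1-x_2}=\frac{381.2612\times 0.05}{385-321.5}\approx 0.3002\ \mathrm{m},$$

which agrees with the reconstructed center coordinate reported above.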

2.3. 6R Robot Trajectory Planning

To achieve the initial positioning and movement of the PiH task, it is necessary to plan the trajectory of the 6R robot so that it can move to position T 1 . To ensure the smoothness of velocity and acceleration, a fifth-order polynomial is employed for interpolation [37], with q t representing the joint position of the robot at time t.
The position, velocity and acceleration of the interpolation starting point A are q 0 , q ˙ 0 , and q ¨ 0 , respectively, and the position, velocity, and acceleration of the interpolation target destination B are q T , q ˙ T , and q ¨ T , respectively. The time duration between point A and point B is denoted as T, which commences from the initial position A at   t = 0 and terminates at the destination B at t = T. Consequently, we can derive
$$\begin{cases}
q_0=a_0\\
\dot q_0=a_1\\
\ddot q_0=2a_2\\
q_T=a_0+a_1T+a_2T^2+a_3T^3+a_4T^4+a_5T^5\\
\dot q_T=a_1+2a_2T+3a_3T^2+4a_4T^3+5a_5T^4\\
\ddot q_T=2a_2+6a_3T+12a_4T^2+20a_5T^3
\end{cases}$$
The solution gives
$$\begin{cases}
a_0=q_0\\
a_1=\dot q_0\\
a_2=\tfrac{1}{2}\ddot q_0\\
a_3=\dfrac{1}{2T^3}\left[20\left(q_T-q_0\right)-\left(8\dot q_T+12\dot q_0\right)T+\left(\ddot q_T-3\ddot q_0\right)T^2\right]\\[1ex]
a_4=\dfrac{1}{2T^4}\left[-30\left(q_T-q_0\right)+\left(14\dot q_T+16\dot q_0\right)T-\left(2\ddot q_T-3\ddot q_0\right)T^2\right]\\[1ex]
a_5=\dfrac{1}{2T^5}\left[12\left(q_T-q_0\right)-\left(6\dot q_T+6\dot q_0\right)T+\left(\ddot q_T-\ddot q_0\right)T^2\right]
\end{cases}$$
The coefficients can be obtained by inputting the initial joint angle and target joint angle into Equation (11), which are then substituted in q t to determine the joint angle of the robot arm at each time step. This information is transmitted to the robot for precise control over its movement, allowing it to reach the specified position T 1 and complete initial positioning of the PiH task.
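A MATLAB sketch of Equations (10) and (11) for a single joint is shown below (the boundary conditions and duration are arguments; in this system the start and target joint angles come from the current configuration and from the hole-searching result):

```matlab
function q_t = quintic_interp(q0, dq0, ddq0, qT, dqT, ddqT, T, t)
% Fifth-order polynomial joint interpolation, Equations (10) and (11).
%   q0, dq0, ddq0 : position, velocity, acceleration at t = 0
%   qT, dqT, ddqT : position, velocity, acceleration at t = T
%   t             : scalar or vector of time instants in [0, T]
    a0 = q0;
    a1 = dq0;
    a2 = ddq0 / 2;
    a3 = ( 20*(qT - q0) - (8*dqT + 12*dq0)*T - (3*ddq0 -   ddqT)*T^2) / (2*T^3);
    a4 = (-30*(qT - q0) + (14*dqT + 16*dq0)*T + (3*ddq0 - 2*ddqT)*T^2) / (2*T^4);
    a5 = ( 12*(qT - q0) -  6*(dqT +    dq0)*T - (  ddq0 -   ddqT)*T^2) / (2*T^5);
    q_t = a0 + a1*t + a2*t.^2 + a3*t.^3 + a4*t.^4 + a5*t.^5;
end
```

Evaluating this polynomial at each control time step yields the joint command stream sent to the robot.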
The target joint angle q T is
q T = 0.7143 ,   1.1622 , 0.0759 ,   0.7751 , 1.2112 , 1.9028
Figure 7a shows the initial pose of the 6R robot. As measured by ROS, in the initial state, the pose relationship between the cross-hole coordinate system {D} and the robot base coordinate system {S} is T 0 , which is recorded for subsequent verification:
$$\boldsymbol{T}_0=\begin{bmatrix}0.07&0.06&1&0.4\\ 0.06&1&0.06&0.3\\ 1&0.06&0.07&0.2\\ 0&0&0&1\end{bmatrix}$$
After the interpolation-based trajectory planning of the 6R robot, Figure 7b depicts the robot's pose once it has moved to the designated position T 1. At this point, the axes of the cross-peg and cross-hole are collinear, but the parts do not make contact. This marks the completion of the initial pose adjustment for the PiH task.

3. Force-Moment Sensor-Based Cross-Peg Insertion

After completing the initial positioning through a hole-searching process, the cross-peg can be inserted into the cross-hole by continuously adjusting its position according to the force-moment sensor value until the PiH task is completed. In this section, the force-moment sensor will be calibrated and gravity compensation will be implemented to mitigate the impact of inherent errors and external loads on the sensor’s measurements. Subsequently, the contact state of the peg and hole is classified and the mechanics of the assembly process are analyzed. Finally, a BP neural network mapping relationship between sensor data and assembly error is established to enable the independent assembly of the cross-peg.

3.1. Force-Moment Sensor Calibration and Gravity Compensation

The measured data of the force-moment sensor consist of three components: the inherent error of the sensor, the gravity acting on both the sensor and the peg, and the external contact force applied to the feedback peg [38,39]. The combined gravity of the sensor and peg is denoted as G, with its centroid located at (x_K, y_K, z_K) in the sensor coordinate system {K}. Based on the material and structural design, the gravity acting upon the force-moment sensor and cross-peg is 9.558 N.
The user has the flexibility to adjust the data in accordance with their own study. The value of the force-moment sensor at no load F 0 , the measured value of the force-moment sensor F s , and the contact force of the cross-peg F e can be recorded as
$$\begin{cases}
\boldsymbol{F}_0=\left[F_{x0},\ F_{y0},\ F_{z0},\ M_{x0},\ M_{y0},\ M_{z0}\right]\\
\boldsymbol{F}_s=\left[F_{x},\ F_{y},\ F_{z},\ M_{x},\ M_{y},\ M_{z}\right]\\
\boldsymbol{F}_e=\left[F_{ex},\ F_{ey},\ F_{ez},\ M_{ex},\ M_{ey},\ M_{ez}\right]
\end{cases}$$
According to the theory of least squares [40], it is obtained that
$$\boldsymbol{m}=\begin{bmatrix}M_x\\ M_y\\ M_z\end{bmatrix}=
\begin{bmatrix}0&F_z&-F_y&1&0&0\\ -F_z&0&F_x&0&1&0\\ F_y&-F_x&0&0&0&1\end{bmatrix}
\begin{bmatrix}x_K\\ y_K\\ z_K\\ k_1\\ k_2\\ k_3\end{bmatrix}=\boldsymbol{F}\boldsymbol{p}$$
where k_1, k_2, and k_3 are constants that lump together the zero-load force and moment offsets and the centroid coordinates.
To solve the matrix using least squares theory, it is necessary to guide the end joint of the robot through six distinct poses while ensuring that at least three of these poses have non-coplanar end-pointing vectors. Additionally, force-moment sensor data must be recorded for each pose. Table 1 shows the force-moment sensor data in six different poses.
The least squares solution p of the matrix from the force-moment sensor data is
$$\boldsymbol{p}=\left(\boldsymbol{F}^{\mathrm T}\boldsymbol{F}\right)^{-1}\boldsymbol{F}^{\mathrm T}\boldsymbol{m}=\left[x_K,\ y_K,\ z_K,\ k_1,\ k_2,\ k_3\right]^{\mathrm T}=\left[0.0006,\ 0.0011,\ 0.0327,\ 0.0034,\ 0.0081,\ 0.0065\right]^{\mathrm T}$$
Similarly, the results of the force-moment sensor calibrated by the least squares method are as follows:
$$\boldsymbol{F}_0=\left[F_{x0},\ F_{y0},\ F_{z0},\ M_{x0},\ M_{y0},\ M_{z0}\right]=\left[0.1714,\ 0.0426,\ 0.0153,\ 0.0020,\ 0.0138,\ 0.0067\right]$$
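A MATLAB sketch of this calibration step is given below (the force and moment readings are placeholders to be filled with the six-pose data of Table 1):

```matlab
% Force and moment readings, one row per robot pose (e.g., from Table 1).
Fdata = zeros(6,3);   % columns: Fx, Fy, Fz  (placeholder values)
Mdata = zeros(6,3);   % columns: Mx, My, Mz  (placeholder values)

% Stack the three moment equations (the coefficient matrix above) per pose.
F = []; m = [];
for i = 1:size(Fdata,1)
    Fx = Fdata(i,1); Fy = Fdata(i,2); Fz = Fdata(i,3);
    F = [F;  0   Fz  -Fy  1 0 0;
            -Fz  0    Fx  0 1 0;
             Fy -Fx   0   0 0 1];
    m = [m; Mdata(i,:)'];
end

% Least-squares solution p = (F'F)^(-1) F'm.
p = (F' * F) \ (F' * m);
centroid = p(1:3);    % (xK, yK, zK)
offsets  = p(4:6);    % k1, k2, k3
```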

3.2. Peg-in-Hole Contact State Classification

The contact point of the PiH task is located at the intersection of the tapered end of the front guide section of the cross-peg and the edge line of the cross-hole, as shown in Figure 8.
Exploiting the central symmetry of the cross-shaped peg and hole, two methods are proposed for classifying the contact states in the PiH insertion task: classification by the number of contact points and classification by the rotation error of the cross-peg.

3.2.1. Classification by the Number of Contact Points

As shown in Figure 9, during the insertion of a cross-shaped peg, there are four types of contact states according to the number of contact points: one-point contact, two-point contact, three-point contact, and four-point contact.
In Figure 9, it can be seen that the circumferential forces and moments on the cross-peg are complex when the guide area of the cross-peg is in contact with the cross-hole, while the force component F_{ez} is simpler and may be related to the number of contact points. To verify this hypothesis, the robot was manipulated to create 100 contacts between the cross-peg and the cross-hole in each of the one-point, two-point, three-point, and four-point states. The unit vector of the external force on the cross-peg is as follows:
$$\boldsymbol{n}_0=\begin{bmatrix}n_{x0}\\ n_{y0}\\ n_{z0}\end{bmatrix}=\frac{1}{\sqrt{F_{ex}^2+F_{ey}^2+F_{ez}^2}}\begin{bmatrix}F_{ex}\\ F_{ey}\\ F_{ez}\end{bmatrix}$$
where the data of n z 0 are shown in Figure 10.
It can be seen in Figure 10 that n z 0 has a clear mapping relationship with the contact state; it is concluded that n z 0 is distributed between 0.23 and 0.31 with a mean value of 0.278 in the case of one-point contact, between 0.33 and 0.42 with a mean value of 0.376 in the case of two-point contact, between 0.58 and 0.71 with a mean value of 0.649 in the case of three-point contact, and is infinitely close to 1 in the case of four-point contact. According to the mapping relationship between n z 0 and the contact state, the contact state of the peg can be determined for subsequent inserting operations.
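A minimal sketch of how this mapping could be used to infer the number of contact points from a force reading is shown below (the decision thresholds are simply placed between the empirical ranges quoted above; they are illustrative, not tuned values from the paper):

```matlab
function n_points = contact_points_from_force(Fe)
% Estimate the number of contact points from the normalized component
% n_z0 = Fez / |Fe| of the external force on the cross-peg.
%   Fe : 1x3 vector [Fex Fey Fez] of the external contact force
    nz0 = abs(Fe(3)) / norm(Fe);     % magnitude of the normalized component
    if nz0 < 0.32
        n_points = 1;                % one-point contact   (~0.23-0.31)
    elseif nz0 < 0.50
        n_points = 2;                % two-point contact   (~0.33-0.42)
    elseif nz0 < 0.85
        n_points = 3;                % three-point contact (~0.58-0.71)
    else
        n_points = 4;                % four-point contact  (n_z0 close to 1)
    end
end
```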

3.2.2. Classification by Rotation Error

The rotation angle of the cross-hole coordinate system {D} about the Z_J axis of the cross-peg coordinate system {J} is α. Depending on the direction of rotation, the cases α < 0 and α > 0 can be distinguished; Figure 11 shows diagrams of these two contact states.

3.3. Force-Moment Sensor for Each Contact State

The two states can be distinguished according to the sign of M_{ez} in the force-moment sensor data: when α < 0, M_{ez} is negative, and when α > 0, M_{ez} is positive. By combining the two classification methods described in Section 3.2.1 and Section 3.2.2, a total of 26 different contact cases are obtained. To describe each contact state succinctly, a numbering system of the following format is used:
$$w_1\text{-}w_2\text{-}w_3$$
where w_1 indicates the number of contact points, ranging from 1 to 4; w_2 indicates the direction of rotation, with w_2 = 1 when α < 0 and w_2 = 2 when α > 0; and w_3 distinguishes the individual contact cases that share the same w_1 and w_2. For example, case 1-2-3 denotes the third contact case with one-point contact and α > 0.

3.3.1. One-Point Contact

There are eight one-point contact cases, as shown in Figure 12. The force-moment sensor data for the one-point contact case is presented in Table 2.

3.3.2. Two-Point Contact

There are eight two-point contact cases, as shown in Figure 13. The force-moment sensor data for the two-point contact case is presented in Table 3.

3.3.3. Three-Point Contact

There are eight three-point contact cases, as shown in Figure 14. The force-moment sensor data for the three-point contact case is presented in Table 4.

3.3.4. Four-Point Contact

There are two four-point contact cases, as shown in Figure 15. The force-moment sensor data for the four-point contact case is presented in Table 5.

3.4. Mechanical Analysis of PiH Assembly

Figure 16 illustrates the forces exerted when the cross-peg and cross-hole make contact in case 1-1-1. The contact point of the PiH task is located at the intersection of line N_1M_1 and line D_1D_2. The direction of the contact force F on the cross-peg is normal to the plane formed by the two lines, pointing toward the inside of the cross-hole. The direction of the friction force μF is along the tapered end of the cross-peg, from point M_1 to point N_1.
The coordinates of the point N 1 are n 1 ,   n 2 ,   n 3 , and the coordinates of the point M 1 are m 1 ,   m 2 ,   m 3 . The coordinates of point D 1 and point D 2 in the cross-peg coordinate system {D} are x 1 ,   y 1 ,   z 1 and x 2 ,   y 2 ,   z 2 , respectively. Then the coordinates of the contact point B 1 is
$$\boldsymbol{B}_1=\begin{bmatrix}B_{1x}\\ B_{1y}\\ B_{1z}\end{bmatrix}=
\begin{bmatrix}
\dfrac{\dfrac{n_2-m_2}{n_1-m_1}\,m_1-m_2-\dfrac{y_2-y_1}{x_2-x_1}\,x_1+y_1}{\dfrac{n_2-m_2}{n_1-m_1}-\dfrac{y_2-y_1}{x_2-x_1}}\\[4ex]
\dfrac{\left(B_{1x}-x_1\right)\left(y_2-y_1\right)}{x_2-x_1}+y_1\\[2ex]
\dfrac{\left(B_{1x}-x_1\right)\left(z_2-z_1\right)}{x_2-x_1}+z_1
\end{bmatrix}$$
The coordinates of each contact point in other contact cases can be solved using the same method, which will not be repeated in this paper.
The cross-peg is subjected to a contact force F 1 and a frictional force f 1 at the contact point B 1 . The direction of the contact force F 1 is normal to the plane formed by line N 1 M 1 and line D 1 D 2 , pointing inside the cross-peg; the direction of the frictional force f 1 is along line N 1 M 1 , opposite to the direction of movement of the cross-peg.
The direction vector of the contact force F 1 is
$$\boldsymbol{n}_1=\begin{bmatrix}n_{1x}\\ n_{1y}\\ n_{1z}\end{bmatrix}=\frac{\overrightarrow{M_1N_1}\times\overrightarrow{D_1D_2}}{\left\|\overrightarrow{M_1N_1}\times\overrightarrow{D_1D_2}\right\|}$$
Assuming the magnitude of the contact force on the cross-peg at the contact point B 1 is F 1 , then
$$\boldsymbol{F}_1=\begin{bmatrix}F_{1x}\\ F_{1y}\\ F_{1z}\end{bmatrix}=\begin{bmatrix}F_1 n_{1x}\\ F_1 n_{1y}\\ F_1 n_{1z}\end{bmatrix}=F_1\boldsymbol{n}_1$$
The direction vector of the frictional force, n_{f1}, is
$$\boldsymbol{n}_{f1}=\begin{bmatrix}n_{f1x}\\ n_{f1y}\\ n_{f1z}\end{bmatrix}=\frac{\overrightarrow{M_1N_1}}{\left\|\overrightarrow{M_1N_1}\right\|}$$
Then the frictional force f 1 is
$$\boldsymbol{f}_1=\begin{bmatrix}f_{1x}\\ f_{1y}\\ f_{1z}\end{bmatrix}=\begin{bmatrix}\mu F_1 n_{f1x}\\ \mu F_1 n_{f1y}\\ \mu F_1 n_{f1z}\end{bmatrix}=\mu F_1\boldsymbol{n}_{f1}$$
Additionally, the force-moment sensor value on the cross-peg is then
$$\begin{cases}
F_{ex}=\sum_{i=1}^{4}\left(F_{ix}+f_{ix}\right)\\
F_{ey}=\sum_{i=1}^{4}\left(F_{iy}+f_{iy}\right)\\
F_{ez}=\sum_{i=1}^{4}\left(F_{iz}+f_{iz}\right)\\
M_{ex}=\sum_{i=1}^{4}\left(F_{iz}B_{iy}-F_{iy}B_{iz}\right)\\
M_{ey}=\sum_{i=1}^{4}\left(F_{ix}B_{iz}-F_{iz}B_{ix}\right)\\
M_{ez}=\sum_{i=1}^{4}\left(F_{iy}B_{ix}-F_{ix}B_{iy}\right)
\end{cases}$$
where i represents the contact point number for each contact state.
The relative error (α, β, γ, x, y, z) between the peg and the hole can be obtained by solving Equation (24), and the peg pose is then adjusted according to this error.
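For reference, the forward model behind Equation (24), namely the resultant wrench produced by the contact and friction forces at the contact points, can be written compactly. The sketch below assumes the contact geometry (points, normals, and sliding directions) is already known, and it takes the moment from the total force (contact plus friction) at each point:

```matlab
function [Fe, Me] = contact_wrench(B, Fmag, n, nf, mu)
% Resultant force and moment on the cross-peg from k contact points.
%   B    : 3xk contact point coordinates B_i
%   Fmag : 1xk contact force magnitudes F_i
%   n    : 3xk unit normals of the contact forces
%   nf   : 3xk unit friction directions (along M_iN_i)
%   mu   : friction coefficient
    Fe = zeros(3,1);
    Me = zeros(3,1);
    for i = 1:size(B,2)
        Fi = Fmag(i) * (n(:,i) + mu * nf(:,i));  % contact force + friction
        Fe = Fe + Fi;                            % force sums of Equation (24)
        Me = Me + cross(B(:,i), Fi);             % moment sums of Equation (24)
    end
end
```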

4. Automatic PiH Assembly Based on BP Neural Network

4.1. Establishment of BP Neural Network

Due to the numerous unknowns and the lengthy solution time of Equation (24), a mapping relationship between the force-moment sensor readings and the angle errors was established based on a BP neural network [41] in order to achieve automatic assembly. This enables the robot to adjust its pose directly by obtaining the angular error from sensor data when the peg and hole are in contact, thereby realizing PiH tasks automatically.
The input layer consists of the three force components and three moment components measured by the force-moment sensor. Since the main purpose of the BP neural network is to predict the angular error between the peg and the hole from the force-moment sensor data, the output layer consists of the three angle errors α, β, and γ about the three axes. The BP neural network algorithm places high demands on the network structure: although increasing the number of layers can reduce the training error, it also makes the network more complicated and increases the training time. For the problem of mapping sensor data to angle errors, a BP neural network with a single hidden layer is sufficient, and the error can be reduced by increasing the number of neurons in that layer. Therefore, a three-layer BP neural network model with one hidden layer was used in this study.
The transfer function plays a crucial role in determining the error rate, computational complexity, and training efficiency of the BP neural network model. In this study, the evaluation index values were standardized to the range 0 to 1, so the Sigmoid function was selected as the transfer function for the hidden layer. Because only a limited number of samples is available for the PiH assembly process, a traditional BP training scheme can struggle to reach high output accuracy. The traingdx training function, provided by the MATLAB 2022b Neural Network Toolbox, is therefore used to shorten the time required to reach the specified accuracy, avoid oscillation during training, and improve the convergence speed and learning accuracy [42]. The BP neural network model established in this paper based on the above analysis is illustrated in Figure 17.

4.2. BP Neural Network Training Process

The BP neural network presented in this paper utilizes force-moment sensor data as the input and predicts angular error as the output. The sample data required for training are obtained through ROS, where the pose of the cross-peg is adjusted, and force-moment sensor data and angle error data are collected as samples for training in 26 states. This mapping relationship between sensor information and angle error allows the robot to autonomously determine its state and execute PiH tasks. Table 6 and Table 7 display a portion of the training samples.
Due to the significant difference in size between the training data, normalization is necessary to facilitate effective training:
$$F_i=\frac{X_i-X_{i\min}}{X_{i\max}-X_{i\min}}$$
where X_i is the i-th sample value of the indicator, F_i is the standardized value of X_i, and X_{imax} and X_{imin} are the maximum and minimum values of that indicator among the samples.
Through continuous testing and evaluation, it was determined that the best performance is achieved with 150 neurons in the hidden layer. The logsig function is used as the hidden layer transfer function, while the purelin function is used for the output layer; both functions are provided by the MATLAB Neural Network Toolbox. The traingdx function is employed as the training function, with the mean square error (MSE) serving as the performance function. The target error is 0.001, and the maximum number of learning iterations is set to 1000.
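The training setup described above can be reproduced with the MATLAB Neural Network Toolbox; a sketch follows (the data files are placeholders for the samples of Tables 6 and 7, and any option not stated in the text is illustrative):

```matlab
% Training samples: X is 6xN (Fx..Mz), Y is 3xN (alpha, beta, gamma).
X = readmatrix('sensor_samples.csv')';   % placeholder file names
Y = readmatrix('angle_errors.csv')';

% Min-max normalization to [0, 1], Equation (25).
[Xn, psX] = mapminmax(X, 0, 1);
[Yn, psY] = mapminmax(Y, 0, 1);

% One hidden layer with 150 neurons, trained with traingdx.
net = feedforwardnet(150, 'traingdx');
net.layers{1}.transferFcn = 'logsig';    % hidden layer
net.layers{2}.transferFcn = 'purelin';   % output layer
net.performFcn        = 'mse';
net.trainParam.goal   = 1e-3;            % target error
net.trainParam.epochs = 1000;            % maximum iterations

net = train(net, Xn, Yn);

% Predict the angle error for a new 6x1 force-moment reading f:
% err = mapminmax('reverse', net(mapminmax('apply', f, psX)), psY);
```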
Figure 18 shows the validation results after training: the sample values, the predicted values, and the errors between them for each angle error. The errors are within 1°, which meets the requirements of PiH assembly [43]. Using this mapping model, the angular error can be obtained swiftly and accurately and substituted into Equation (24), after which the translation error can be calculated in a short time.

5. Experiments

After completing the calibration of the force-moment sensor and BP neural network training, the assembly process of the cross-peg and cross-hole can be executed; the changes in the contact force and moment between the peg and the hole during the assembly process are shown in Figure 19.
During approximately the first 5 s, the cross-peg moves along its axis without any contact with the cross-hole, as shown in Figure 20a, and the force-moment sensor reading remains zero. At around 6 s, the cross-peg continues to approach and insert into the cross-hole, resulting in an abrupt change in the sensor value that indicates initial contact between the peg and the hole. The contact force value is (−19.247, −0.785, −5.3953) N, while the moment value is (−0.044, −1.244, 0.339) Nm.
Using the established BP neural network, the contact state is judged to be 1-2-2, and the angle errors mapped from the force-moment sensor values are (0.057, 0.074, 0.076) rad in the cross-hole coordinate system; the corresponding translation errors are (−0.011, 0.006, 0.065) m. The robot pose is adjusted according to the calculated error, the contact force between the peg and the hole returns to zero after the adjustment, and the cross-peg continues to move along its axis. Similarly, the second contact occurs at around 13 s, as shown in Figure 20b, and is followed by automatic pose adjustment; the third contact occurs at around 24 s, as shown in Figure 20c, and the assembly is completed after 27 s.
After the third adjustment is completed, the force-moment sensor data stays close to zero, apart from the friction acting on the peg; increasing the insertion depth of the cross-peg no longer generates abrupt changes, indicating that the relative error between the peg and the hole has been brought within acceptable accuracy limits. The state of the system after completion of the PiH assembly is shown in Figure 21, in which the cross-peg coordinate system {J} coincides with the cross-hole coordinate system {D}, and the joint angles of the robot arm are
θ = 0.7692 ,   1.1936 , 0.1269 ,   0.8692 , 1.3075 , 0.4335
The joint angle is substituted into the transformation matrix of the robot link to obtain the relative pose T 2 of the cross-hole coordinate system {D} relative to the robot base coordinate system {S}, which is
$$\boldsymbol{T}_2=\begin{bmatrix}0.0690&0.0745&0.9948&0.4365\\ 0.0594&0.9957&0.0704&0.3139\\ 0.9959&0.0542&0.0731&0.2042\\ 0&0&0&1\end{bmatrix}$$
The relative pose obtained by the autonomous assembly system exhibits a negligible error relative to the pose measured by ROS in the initial state as shown in Equation (13), thereby validating the accuracy and correctness of the autonomous assembly system.

6. Conclusions

Current peg-in-hole assembly research mostly involves circular pegs and holes, whose rotational DoF about the axis cannot be constrained after assembly, making it difficult to obtain the precise relative pose between the robot base and the hole. To address these limitations, this paper proposes a new PiH assembly strategy based on a cross-shaped peg and hole.
As many studies have focused solely on the insertion phase of PiH tasks, this paper aimed to establish a comprehensive autonomous assembly system that includes both hole-searching and insertion based on force/moment and visual information. The central coordinates of the cross-hole were obtained using the three-dimensional reconstruction theory of binocular stereo vision, and the robot trajectory was planned to align the axis of the cross-peg with the cross-hole without physical contact, thus achieving an autonomous hole-searching process. All 26 contact states of the cross-peg and cross-hole were then classified and numbered according to the number of contact points and the angle errors between the peg and the hole, the force-moment sensor values under the different contact states were summarized, and a BP neural network was trained on the sensor information and the relative errors to establish the mapping relationship between force and error. Finally, the PiH task was successfully executed to address the autonomous assembly challenge of the cross-peg and cross-hole, thereby obtaining the relative pose between the robot base coordinate system and the cross-hole coordinate system.
This new assembly strategy can replace the use of cross-pegs and cross-holes once this pose is obtained, enabling the assembly of more complex-shaped PiH tasks, such as spline assembly and square peg assembly. This solves the problem in which users need to reformulate the assembly strategy or design special sensors according to the size and shape of the peg. This not only reduces users’ workload but also enhances industrial production efficiency.
However, the research on this topic still needs to be improved, and our future research directions are as follows:
  • Firstly, we will compare the assembly efficiency and convergence speed of the mapping model under different algorithms, such as reinforcement learning and iterative learning control;
  • Secondly, the replacement of various shaped pegs and holes to test their assembly effects was not conducted in this article. However, future research will focus on this area;
  • Finally, if the 6R robot can be fixed on a 6-DoF platform, after obtaining the relative pose of the robot base coordinate system and the cross-hole coordinate system, the platform can achieve rendezvous and docking with another platform while continuously detecting the docking process based on the follow-up function of the 6R robot. After the system is built, it can realize the assembly task of heavy machinery, such as rocket tank assembly and missile assembly.

Author Contributions

Conceptualization, Z.M.; methodology, Z.M.; software, Z.M. and Y.Z.; validation, Z.M.; formal analysis, Z.M. and X.H.; investigation, Z.M.; resources, Z.M. and Y.Z.; data curation, Z.M.; writing—original draft preparation, Z.M. and X.H.; writing—review and editing, Z.M.; visualization, Z.M. and Y.Z.; supervision, Z.M.; project administration, Z.M. and X.H.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, J.; Liu, Y.; Morgan, P.L. Human–machine interaction towards Industry 5.0: Human-centric smart manufacturing. Digit. Eng. 2024, 2, 100013. [Google Scholar] [CrossRef]
  2. Shiyan, L.; Pengyue, L.; Jinfeng, W.; Peng, L. Toward industry 5.0: Challenges and enablers of intelligent manufacturing technology implementation under the perspective of sustainability. Heliyon 2024, 10, e35162. [Google Scholar] [CrossRef]
  3. Selvarajan, S.; Manoharan, H.; Shankar, A. SL-RI: Integration of supervised learning in robots for industry 5.0 automated application monitoring. Meas. Sens. 2024, 31, 100972. [Google Scholar] [CrossRef]
  4. Fang, S.; Huang, X.; Chen, H.; Xi, N. Dual-Arm Robot Assembly System for 3C Product Based on Vision Guidance. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO), Qingdao, China, 3–7 December 2016. [Google Scholar]
  5. Jokesch, M.; Suchy, J.; Winkler, A.; Fross, A.; Thomas, U. Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin Alignments with Impedance Controlled Robots. In Proceedings of the 2nd Iberian Robotics Conference (ROBOT), Lisbon, Portugal, 19–21 November 2016. [Google Scholar]
  6. Bing-Long, W.U.; Dao-Kui, Q.U.; Fang, X.U. Industrial robot high precision peg-in-hole assembly based on hybrid force/position control. J. Zhejiang Univ. (Eng. Sci.) 2018, 52, 379–386. [Google Scholar]
  7. Jiang, J.; Huang, Z.; Bi, Z.; Ma, X.; Yu, G. State-of-the-Art control strategies for robotic PiH assembly. Robot. Comput.-Integr. Manuf. 2022, 65, 101894. [Google Scholar] [CrossRef]
  8. Pitchandi, N.; Subramanian, S.P.; Irulappan, M. Insertion force analysis of compliantly supported peg-in-hole assembly. Assem. Autom. 2017, 37, 285–295. [Google Scholar] [CrossRef]
  9. Chi, P.; Wang, Z.; Liao, H.; Li, T.; Wu, X.; Zhang, Q. Towards new-generation of intelligent welding manufacturing: A systematic review on 3D vision measurement and path planning of humanoid welding robots. Measurement 2025, 242, 116065. [Google Scholar] [CrossRef]
  10. Guo, Q.; Yang, Z.; Xu, J.; Jiang, Y.; Wang, W.; Liu, Z.; Zhao, W.; Sun, Y. Progress, challenges and trends on vision sensing technologies in automatic/intelligent robotic welding: State-of-the-art review. Robot. Comput.-Integr. Manuf. 2024, 89, 102767. [Google Scholar] [CrossRef]
  11. Whitney, D.E. Quasi-Static Assembly of Compliantly Supported Rigid Parts. J. Dyn. Syst. Meas. Control 1982, 104, 65–77. [Google Scholar] [CrossRef]
  12. Bin, W.; Jiwen, Z.; Dan, W. Force–vision fusion fuzzy control for robotic batch precision assembly of flexibly absorbed pegs. Robot. Comput.-Integr. Manuf. 2025, 92, 102861. [Google Scholar] [CrossRef]
  13. Park, H.; Park, J.; Lee, D.-H.; Park, J.-H.; Baeg, M.-H.; Bae, J.-H. Compliance-Based Robotic Peg-in-Hole Assembly Strategy Without Force Feedback. IEEE Trans. Ind. Electron. 2017, 64, 6299–6309. [Google Scholar] [CrossRef]
  14. Chhatpar, S.R.; Branicky, M.S. Search strategies for peg-in-hole assemblies with position uncertainty. In Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS 2001), Maui, HI, USA, 29 October–3 November 2001. [Google Scholar]
  15. Marvel, J.A.; Bostelman, R.; Falco, J. Multi-Robot Assembly Strategies and Metrics. ACM Comput. Surv. 2018, 51, 14. [Google Scholar] [CrossRef] [PubMed]
  16. Ahn, K.-H.; Na, M.; Song, J.-B. Robotic assembly strategy via reinforcement learning based on force and visual information. Robot. Auton. Syst. 2023, 164, 104399. [Google Scholar] [CrossRef]
  17. Nigro, M.; Sileo, M.; Pierri, F.; Genovese, K.; Bloisi, D.D.; Caccavale, F. Peg-in-Hole Using 3D Workpiece Reconstruction and CNN-Based Hole Detection. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 4235–4240. [Google Scholar]
  18. Vecerik, M.; Sushkov, O.; Barker, D.; Rothorl, T.; Hester, T.; Scholz, J. A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
  19. Schoettler, G.; Nair, A.; Luo, J.; Bahl, S.; Ojea, J.A.; Solowjow, E.; Levine, S. Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Electr Network, Las Vegas, NV, USA, 24 October 2020–24 January 2021. [Google Scholar]
  20. Chang, R.J.; Lin, C.Y.; Lin, P.S. Visual-Based Automation of Peg-in-Hole Microassembly Process. J. Manuf. Sci. Eng. 2011, 133, 041015. [Google Scholar] [CrossRef]
  21. Li, J.; Pang, D.; Zheng, Y.; Guan, X.; Le, X. A flexible manufacturing assembly system with deep reinforcement learning. Control Eng. Pract. 2022, 118, 104957. [Google Scholar] [CrossRef]
  22. Gai, Y.; Zhang, J.; Wu, D.; Chen, K. Local connection reinforcement learning method for efficient robotic peg-in-hole assembly. Eng. Appl. Artif. Intell. 2024, 133, 108520. [Google Scholar] [CrossRef]
  23. Zhang, T.; Liang, X.; Zou, Y. Robot peg-in-hole assembly based on contact force estimation compensated by convolutional neural network. Control Eng. Pract. 2022, 120, 105012. [Google Scholar] [CrossRef]
  24. Simonič, M.; Ude, A.; Nemec, B. Hierarchical learning of robotic contact policies. Robot. Comput.-Integr. Manuf. 2024, 86, 102657. [Google Scholar] [CrossRef]
  25. Gai, Y.; Wang, B.; Zhang, J.; Wu, D.; Chen, K. Piecewise strategy and decoupling control method for high pose precision robotic peg-in-hole assembly. Robot. Comput.-Integr. Manuf. 2023, 79, 102451. [Google Scholar] [CrossRef]
  26. Song, H.-C.; Kim, Y.-L.; Song, J.-B. Guidance algorithm for complex-shape peg-in-hole strategy based on geometrical information and force control. Adv. Robot. 2016, 30, 552–563. [Google Scholar] [CrossRef]
  27. Park, H.; Park, J.; Lee, D.H.; Park, J.H.; Bae, J.H. Compliant Peg-in-Hole Assembly Using Partial Spiral Force Trajectory with Tilted Peg Posture. IEEE Robot. Autom. Lett. 2020, 5, 4447–4454. [Google Scholar] [CrossRef]
  28. Li, W.; Cheng, H.; Li, C.; Zhang, X. Robotic Assembly for Irregular Shaped Peg-in-Hole with Partial Constraints. Appl. Sci. 2021, 11, 7394. [Google Scholar] [CrossRef]
  29. Spector, O.; Castro, D.D. InsertionNet—A Scalable Solution for Insertion. IEEE Robot. Autom. Lett. 2021, 6, 5509–5516. [Google Scholar] [CrossRef]
  30. Zhao, C.; Fan, C.; Zhao, Z. A binocular camera calibration method based on circle detection. Heliyon 2024, 10, e38347. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, H.; Chen, C.; Liu, Y.; Ren, B.; Zhang, Y.; Zhao, X.; Chi, Y. A novel approach for robotic welding trajectory recognition based on pseudo-binocular stereo vision. Opt. Laser Technol. 2024, 174, 110669. [Google Scholar] [CrossRef]
  32. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  33. Das, D.; Naskar, R. Image splicing detection using low-dimensional feature vector of texture features and Haralick features based on Gray Level Co-occurrence Matrix. Signal Process. Image Commun. 2024, 125, 117134. [Google Scholar] [CrossRef]
  34. Ma, K.; Zhou, H.; Li, J.; Liu, H. Design of Binocular Stereo Vision System with Parallel Optical Axes and Image 3D Reconstruction. In Proceedings of the 2019 China-Qatar International Workshop on Artificial Intelligence and Applications to Intelligent Manufacturing (AIAIM), Doha, Qatar, 1–4 January 2019; pp. 59–62. [Google Scholar]
  35. Zhu, J.; Zeng, Q.; Han, F.; Jia, C.; Bian, Y.; Wei, C. Design of laser scanning binocular stereo vision imaging system and target measurement. Optik 2022, 270, 169994. [Google Scholar] [CrossRef]
  36. Feng, G.; Liu, Y.; Shi, W.; Miao, Y. Binocular camera-based visual localization with optimized keypoint selection and multi-epipolar constraints. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102228. [Google Scholar] [CrossRef]
  37. Biagiotti, L.; Melchiorri, C. Trajectory Planning for Automatic Machines and Robots; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  38. Yu, Y.; Shi, R.; Lou, Y. Bias Estimation and Gravity Compensation for Wrist-Mounted Force/Torque Sensor. IEEE Sens. J. 2022, 22, 17625–17634. [Google Scholar] [CrossRef]
  39. Li, H.-M.; Liu, C.-K.; Yang, Y.-C.; Tsai, M.-S. Simultaneous compensation of geometric and compliance errors for robotics with consideration of variable payload effects. Mechatronics 2024, 102, 103228. [Google Scholar] [CrossRef]
  40. Niu, J.; Xu, M.; Lin, Y.; Xue, Q. Numerical solution of nonlinear singular boundary value problems. J. Comput. Appl. Math. 2018, 331, 42–51. [Google Scholar] [CrossRef]
  41. Liang, Q.; Wu, W.; Coppola, G.; Zhang, D.; Sun, W.; Ge, Y.; Wang, Y. Calibration and decoupling of multi-axis robotic Force/Moment sensors. Robot. Comput.-Integr. Manuf. 2018, 49, 301–308. [Google Scholar] [CrossRef]
  42. Abidi, K.; Xu, J.X. Iterative Learning Control for Sampled-Data Systems: From Theory to Practice. IEEE Trans. Ind. Electron. 2011, 58, 3002–3015. [Google Scholar] [CrossRef]
  43. Du, K.-L.; Zhang, B.-B.; Huang, X.; Hu, J. Dynamic analysis of assembly process with passive compliance for robot manipulators. In Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Computational Intelligence in Robotics and Automation for the New Millennium (Cat. No.03EX694), Kobe, Japan, 16–20 July 2003; Volume 1163, pp. 1168–1173. [Google Scholar]
Figure 1. Autonomous assembly system diagram.
Figure 2. Peg-in-hole assembly flowchart.
Figure 3. Image processing. (a) Camera shooting in ROS; (b) Gray-scale image; (c) Binarization processing image.
Figure 4. Parallel binocular vision schematic.
Figure 5. Relative pose of the binocular vision system.
Figure 6. Camera shooting and cross-peg center: (a) Left camera shooting; (b) Right camera shooting; (c) Cross-peg center of left camera shooting; (d) Cross-peg center of right camera shooting.
Figure 7. Hole-searching and pose adjusting results: (a) 6R robot original pose; (b) 6R robot final pose.
Figure 8. Cross-shaped PiH insertion task contact diagram.
Figure 9. PiH contact state classification by number of contact points: (a) One-point contact; (b) Two-point contact; (c) Three-point contact; (d) Four-point contact.
Figure 10. The value of n_z0 in different contact states.
Figure 11. PiH contact state classification by rotation error: (a) α < 0; (b) α > 0.
Figure 12. One-point contact cases diagram: (a) Case 1-1-1; (b) Case 1-1-2; (c) Case 1-1-3; (d) Case 1-1-4; (e) Case 1-2-1; (f) Case 1-2-2; (g) Case 1-2-3; (h) Case 1-2-4.
Figure 13. Two-point contact cases diagram: (a) Case 2-1-1; (b) Case 2-1-2; (c) Case 2-1-3; (d) Case 2-1-4; (e) Case 2-2-1; (f) Case 2-2-2; (g) Case 2-2-3; (h) Case 2-2-4.
Figure 14. Three-point contact cases diagram: (a) Case 3-1-1; (b) Case 3-1-2; (c) Case 3-1-3; (d) Case 3-1-4; (e) Case 3-3-1; (f) Case 3-2-2; (g) Case 3-2-3; (h) Case 3-2-4.
Figure 15. Four-point contact cases diagram: (a) Case 4-1-1; (b) Case 4-2-1.
Figure 16. Case 1-1-1 force diagram.
Figure 17. Three-layer BP neural network model.
Figure 18. BP neural network model training results: (a) Angle error α around the Z axis; (b) Angle error β around the Y axis; (c) Angle error γ around the X axis.
Figure 19. Contact force and moment between the peg and the hole during PiH assembly: (a) Contact force variation; (b) Contact moment variation.
Figure 20. Contact process during PiH assembly: (a) First contact; (b) Second contact; (c) Third contact.
Figure 21. The state of the system after completing the PiH assembly.
Table 1. Force-moment sensor data in six poses.
No. | 6R robot pose (homogeneous transform) | Force (N) | Moment (Nm)
1 | [0.0207 0.0000 0.9998 0.5423; 0.0000 1.0000 0.0000 0.0015; 0.9998 0.0000 0.0207 0.6277; 0 0 0 1] | (9.552, 0, 0) | (0, 0.308, 0)
2 | [0.9999 0.0000 0.0155 0.3187; 0.0000 1.0000 0.0000 0.0015; 0.0155 0.0000 0.9999 0.4142; 0 0 0 1] | (0, 0.001, 9.552) | (0, 0.001, 0)
3 | [0.9999 0.0148 0.0000 0.3219; 0.0000 0.0000 1.0000 0.2193; 0.0148 0.9999 0.0000 0.6356; 0 0 0 1] | (0.085, 9.553, 0) | (0.308, 0.006, 0)
4 | [0.4114 0.1916 0.8911 0.5707; 0.4007 0.9161 0.0120 0.0484; 0.8187 0.3521 0.4537 0.4670; 0 0 0 1] | (7.802, 3.319, 3.955) | (0.124, 0.244, 0)
5 | [0.4922 0.2466 0.8348 0.5616; 0.3714 0.9269 0.0548 0.0590; 0.7873 0.2830 0.5478 0.4038; 0 0 0 1] | (7.695, 2.747, 4.947) | (0.0989, 0.244, 0)
6 | [0.1457 0.1829 0.9723 0.6339; 0.2244 0.9510 0.2125 0.1749; 0.9635 0.2492 0.0975 0.5851; 0 0 0 1] | (8.739, 2.220, 1.387) | (0.104, 0.272, 0)
Table 2. Force-moment sensor value distribution of one-point contact cases.
Case  Fx  Fy  Fz  Mx  My  Mz
1-1-1 0 + 0
1-1-2 + 0 0 +
1-1-3 0 + 0
1-1-4 0 0
1-2-1 0 + 0 +
1-2-2 0 0 +
1-2-3 0 + 0 +
1-2-4 + 0 0 + +
Table 3. Force-moment sensor value distribution of two-point contact cases.
Case  Fx  Fy  Fz  Mx  My  Mz
2-1-1 + + +
2-1-2 + + +
2-1-3 +
2-1-4 +
2-2-1 + +
2-2-2 + +
2-2-3 + + + +
2-2-4 + + + +
Table 4. Force-moment sensor value distribution of three-point contact cases.
Case  Fx  Fy  Fz  Mx  My  Mz
3-1-1 0 + 0
3-1-2 0 0
3-1-3 0 + 0
3-1-4 + 0 0 +
3-2-1 0 + 0 +
3-2-2 + 0 0 + +
3-2-3 0 + 0 +
3-2-4 0 0 +
Table 5. Force-moment sensor value distribution of four-point contact cases.
Case  Fx  Fy  Fz  Mx  My  Mz
4-1-1 0 0 0 0
4-2-1 0 0 0 0 +
Table 6. Input training data of force-moment sensor values.
No. | Fx (N) | Fy (N) | Fz (N) | Mx (Nm) | My (Nm) | Mz (Nm)
1 | −19.115 | −0.949 | −5.448 | −0.031 | −1.239 | 0.325
2 | −19.833 | −1.806 | −6.285 | −0.007 | −1.226 | 0.375
3 | −17.606 | −0.126 | −4.485 | −0.088 | −1.046 | 0.376
4 | −20.706 | −0.542 | −5.556 | −0.098 | −1.161 | 0.266
5 | −15.283 | −1.343 | −4.804 | 0.002 | −0.977 | 0.481
6 | −20.073 | −0.809 | −5.593 | −0.063 | −1.217 | 0.403
7 | −19.654 | −0.332 | −5.145 | 0.101 | −1.113 | 0.456
8 | −19.699 | −1.912 | −6.341 | −0.014 | −1.177 | 0.399
9 | −18.935 | −0.707 | −5.251 | −0.067 | −1.132 | 0.396
10 | −19.154 | −1.281 | −5.717 | −0.043 | −1.130 | 0.398
Table 7. Output training data of angle errors.
No. | α (rad) | β (rad) | γ (rad)
1 | 0.07 | 0.07 | 0.07
2 | 0.1 | 0.04 | 0.03
3 | 0.02 | 0.1 | 0.05
4 | 0.04 | 0.06 | 0.05
5 | 0.1 | 0.1 | 0.04
6 | 0.06 | 0.05 | 0.07
7 | 0.03 | 0.01 | 0.05
8 | 0.1 | 0.01 | 0.01
9 | 0.04 | 0.09 | 0.01
10 | 0.07 | 0.03 | 0.01