# Dynamic Workpiece Modeling with Robotic Pick-Place Based on Stereo Vision Scanning Using Fast Point-Feature Histogram Algorithm


## Abstract


## 1. Introduction

## 2. Research Methodology

#### 2.1. Experimental Devices and Setup

#### 2.2. Point Cloud Construction and Pre-Processing

#### 2.2.1. Stereo Calibration Principle

Stereo calibration determines the rotation, **R**, and translation, **t**, between the two RGB cameras of the Azure Kinect.
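Once calibrated, the extrinsics map points between the two camera frames. The following is a minimal NumPy sketch, assuming **R** and **t** have already been obtained from calibration; the placeholder values below (identity rotation, 50 mm baseline) are illustrative only, not the calibrated parameters:

```python
import numpy as np

# Hypothetical extrinsics from stereo calibration: R and t map a point
# expressed in camera-1 coordinates into camera-2 coordinates.
R = np.eye(3)                       # placeholder rotation (identity)
t = np.array([0.05, 0.0, 0.0])      # placeholder 50 mm baseline, in metres

def cam1_to_cam2(p1: np.ndarray) -> np.ndarray:
    """Transform a 3-D point from the camera-1 frame to the camera-2 frame:
    p2 = R @ p1 + t."""
    return R @ p1 + t

# Example: a point roughly 1 m in front of camera 1.
p2 = cam1_to_cam2(np.array([0.1, 0.2, 1.0]))
```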

#### 2.2.2. Object Segmentation

```
Algorithm 1: RANSAC algorithm to find the plane model

Input:  point cloud and model estimation
Output: plane model M, rated best amongst all iterations

while (i ≤ maxIterations) do
    sample k points;
    estimate a plane model M;
    compute model inliers;
    if (M is better than bestModel) then
        bestModel ≔ M;
        updateMaxIterations();
    end if
    i ≔ i + 1;
end while
return bestModel
```
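The plane-fitting loop above can be sketched in Python with NumPy. This is a minimal illustration of RANSAC plane fitting, not the exact implementation used here: the adaptive `updateMaxIterations()` step is omitted, and the sample size is fixed at k = 3, the minimum needed to define a plane:

```python
import numpy as np

def ransac_plane(points, dist_th=0.01, max_iterations=200, seed=0):
    """Fit a plane (n, d), with n . p + d = 0, to a point cloud by RANSAC.

    Repeatedly: sample 3 points, estimate a candidate plane M, count
    inliers within dist_th, and keep the best model found so far.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(max_iterations):
        # Sample k = 3 points and estimate a candidate plane model M.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample; skip
            continue
        n /= norm
        d = -n @ p0
        # Compute model inliers: points within dist_th of the plane.
        inliers = int(np.sum(np.abs(points @ n + d) < dist_th))
        if inliers > best_inliers:  # M is better than bestModel
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

For conveyor scenes like the one here, the dominant plane found this way is the conveyor surface, whose inliers are then removed before clustering.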

```
Algorithm 2: Euclidean cluster extraction algorithm to extract the workpiece point cloud

Input:  point cloud data P
Output: point cloud clusters C_i

C_i ≔ ∅;                         // list of clusters
Q ≔ ∅;                           // list of checked points
while (p_i ∈ P) do
    p_i → Q;
    while (p_i ∈ Q) do
        find the neighbours P_k of p_i;
        if (r_{p_k} < d_th) then
            p_k → P_k;
        end if
        while (p_k ∈ P_k) do
            if (p_k has not been processed) then
                p_k → Q;
            end if
        end while
    end while
    if (all points p_i ∈ Q are processed) then
        C_i ≔ Q;
        Q ≔ ∅;
    end if
end while
```
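The clustering loop above can be sketched as follows. This is a minimal NumPy illustration of Euclidean cluster extraction that uses a brute-force pairwise distance matrix in place of the k-d tree neighbour search a production implementation (e.g., PCL's) would use; function and parameter names are chosen for illustration:

```python
import numpy as np

def euclidean_clusters(points, d_th=0.05):
    """Group points into clusters in which neighbours closer than d_th
    are transitively connected (a flood fill over the radius graph)."""
    n = len(points)
    # Pairwise distances; fine for small clouds, use a k-d tree at scale.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    processed = np.zeros(n, dtype=bool)
    clusters = []
    for i in range(n):
        if processed[i]:
            continue
        queue = [i]                 # Q: points awaiting a neighbour check
        processed[i] = True
        k = 0
        while k < len(queue):
            # P_k: all neighbours of queue[k] within radius d_th.
            for j in np.nonzero(dist[queue[k]] < d_th)[0]:
                if not processed[j]:
                    processed[j] = True
                    queue.append(j)
            k += 1
        clusters.append(queue)      # C_i := Q; reset Q for the next seed
    return clusters
```

After the conveyor plane is removed, each remaining cluster corresponds to one candidate workpiece.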

#### 2.3. Pose Estimation System Construction

#### 2.3.1. Feature Point Descriptor

#### 2.3.2. Coarse Alignment Matching

#### 2.3.3. Fine Alignment Matching

#### 2.4. Position Error Compensation of Robot Arm

#### 2.5. Synchronization of Transmission Conveyor and Robot Arm

## 3. Experimental Results

#### 3.1. Experiment on the Accuracy of the Pose Estimation System

#### 3.2. Robot Error Compensation Results

#### 3.3. Experiment of the Dynamic Stack Workpiece Feeding System

## 4. Conclusions and Future Prospects

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 8.** Setup of standard positions for different estimation poses of the dynamic workpiece using a predetermined dimension block for (**a**) the X-axis and Y-axis and (**b**) the Z-axis.

**Figure 9.** Setup of standard rotation angles for different estimation poses of the dynamic workpiece using a BOSCH professional laser rangefinder.

**Figure 10.** Building reference point cloud models of each 6-DoF pose for estimating measured clouds of the dynamic workpiece in the (**a**) X-axis, (**b**) Y-axis, (**c**) Z-axis, (**d**) RX-axis, (**e**) RY-axis, and (**f**) RZ-axis.

**Figure 11.** Translation errors of workpiece modeling at different estimated poses for (**a**) the XY-axis and (**b**) the Z-axis.

**Figure 12.** Rotation errors of workpiece modeling at different estimated poses for (**a**) the RX and RY axes and (**b**) the RZ-axis.

**Figure 13.** Repetition error and error percentage of the 6-DoF pose estimation system: (**a**) translations of the X, Y, and Z axes, and (**b**) rotations of the RX, RY, and RZ axes.

**Figure 15.** Positioning errors of the robot on the XY plane (**a**) before compensation and (**b**) after compensation.

**Figure 16.** Position differences of the robot before and after compensation using the non-linear correction algorithm for robot pick-place.
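As an illustration of the kind of polynomial (non-linear) correction referred to in the Figure 16 caption, the sketch below fits a second-order 2-D polynomial to measured XY positioning errors by least squares and subtracts the predicted error from each commanded target. The basis, polynomial order, and function names here are assumptions chosen for illustration, not the authors' exact formulation:

```python
import numpy as np

def design_matrix(xy):
    """Second-order polynomial basis [1, x, y, x^2, x*y, y^2] per (x, y) row."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_error_model(commanded_xy, measured_xy):
    """Least-squares fit of the positioning error (measured - commanded)
    over a grid of calibration poses; returns coefficients of shape (6, 2)."""
    A = design_matrix(commanded_xy)
    err = measured_xy - commanded_xy
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    return coef

def compensate(target_xy, coef):
    """Subtract the predicted error so the robot lands on the true target."""
    target_xy = np.atleast_2d(target_xy)
    return target_xy - design_matrix(target_xy) @ coef
```

In use, the model would be fitted once from a grid of commanded poses and their measured landing positions, then applied to every pick-place target.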

**Figure 17.** Flow chart of the pose estimation and robot control programs for the dynamic workpiece feeding control system.

**Figure 18.** Dynamic single object on the conveyor (**a**) in a random position, (**b**) automatically identifying the RGB point cloud scene of the object, (**c**) estimating the object's 6-DoF poses, and (**d**) the robot grasping the object in the dynamic experiment.

**Figure 19.** Dynamic piled workpieces on the conveyor (**a**) in a random position, (**b**) automatically identifying the RGB point cloud scene of the top object, (**c**) estimating the 6-DoF pose of the top object only, and (**d**) the robot grasping the top object in the dynamic experiment.

| Item | Static, Case 2 | Static, Case 1 | Dynamic, Case 2 | Dynamic, Case 1 |
|---|---|---|---|---|
| Number of experiments | 20 | 20 | 20 | 20 |
| Successful cases | 18 | 19 | 13 | 14 |
| Failure cases | 2 | 1 | 7 | 6 |
| Success rate (%) | 90 | 95 | 65 | 70 |
| Pose estimation time (s) | 12 | 7 | 12 | 7 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Do, Q.-T.; Chang, W.-Y.; Chen, L.-W.
Dynamic Workpiece Modeling with Robotic Pick-Place Based on Stereo Vision Scanning Using Fast Point-Feature Histogram Algorithm. *Appl. Sci.* **2021**, *11*, 11522.
https://doi.org/10.3390/app112311522

**AMA Style**

Do Q-T, Chang W-Y, Chen L-W.
Dynamic Workpiece Modeling with Robotic Pick-Place Based on Stereo Vision Scanning Using Fast Point-Feature Histogram Algorithm. *Applied Sciences*. 2021; 11(23):11522.
https://doi.org/10.3390/app112311522

**Chicago/Turabian Style**

Do, Quoc-Trung, Wen-Yang Chang, and Li-Wei Chen.
2021. "Dynamic Workpiece Modeling with Robotic Pick-Place Based on Stereo Vision Scanning Using Fast Point-Feature Histogram Algorithm" *Applied Sciences* 11, no. 23: 11522.
https://doi.org/10.3390/app112311522