Article

A Camera-Based Position Correction System for Autonomous Production Line Inspection

Amit Kumar Bedaka, Shao-Chun Lee, Alaa M. Mahmoud, Yong-Sheng Cheng and Chyi-Yeu Lin
1 Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
2 Center for Cyber-Physical System Innovation, National Taiwan University of Science and Technology, Taipei 106, Taiwan
3 Taiwan Building Technology Center, National Taiwan University of Science and Technology, Taipei 106, Taiwan
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(12), 4071; https://doi.org/10.3390/s21124071
Submission received: 30 May 2021 / Revised: 10 June 2021 / Accepted: 12 June 2021 / Published: 13 June 2021
(This article belongs to the Collection Robotics, Sensors and Industry 4.0)

Abstract

Visual inspection is an important task in manufacturing, used to evaluate the completeness and quality of manufactured products. An autonomous robot-guided inspection system was recently developed based on an offline programming (OLP) and RGB-D model system. This system allows an automatic optical inspection (AOI) engineer without robotics expertise to easily perform inspections using scanned data. However, if there is a positioning error due to displacement or rotation of the object, the system cannot be used on a production line. In this study, we developed an automated position correction module that locates an object's position and corrects the robot's pose and position based on the detected displacement or rotation errors. The proposed module comprises an automatic hand–eye calibration and the perspective-n-point (PnP) algorithm. The automatic hand–eye calibration is performed using a calibration board to reduce manual error. After calibration, the PnP algorithm calculates the object position error using artificial marker images and compensates for the error on a new object on the production line. The position correction module then automatically maps the defined AOI target positions onto the new object, unless the target positions change. Our experiments showed that the robot-guided inspection system with the position correction module effectively performed the desired tasks. This smart, innovative system provides a novel advancement by automating the AOI process on a production line to increase productivity.

1. Introduction

Industrial automation had matured considerably by the 1970s, and many companies have since gradually introduced automated production technology into their production lines [1]. Automation has made manufacturing processes more efficient, data-driven, and unified. Automation technologies now underpin many processes, from the loading and unloading of objects to sorting, assembly, and packaging [2].
In 2012, the German government proposed the concept of Industry 4.0 [3]. Many companies have since begun developing automation technologies, from upper-layer software and hardware providers, through middle-layer companies that store, analyze, manage, and provide solutions, to lower-layer companies that apply these technologies in their factories [4]. In addition, smarter integrated sensing and control systems built on existing automation technologies have begun to evolve as part of the development of highly or even fully automated production models [5].
In modern automation technology, development is mainly based on a variety of algorithms and robot arm control systems [6]. The robotic arm has the advantages of high precision, fast-moving speed, and versatile motion. In advanced applications, extended sensing modules, such as vision modules, distance sensors, or force sensors, can be integrated with the robotic arm to give human sense to robotic systems [7].
Robot systems generally acquire the ability to "see" through various vision systems [8]. These help robots detect and locate objects, which speeds up the technical process, improves processing accuracy, and expands the range of applications for automated systems. Vision modules can be broadly divided into 2D vision systems (RGB/mono camera systems), which capture planar images [9], and 3D vision systems (RGB-D camera systems), which additionally capture depth images [10]. The two types have their own advantages, disadvantages, and suitable application scenarios. In industrial manufacturing automation, 2D vision systems are commonly used because they offer high precision and are easy to install and use.
The robotic arm with a 2D vision camera is the most common setup in today's industrial automation applications. Pick-and-place [11] and random bin picking [12] are among the most frequent applications that combine an industrial camera with a manipulator. Several open-source and commercial toolboxes, such as OpenCV [13] and Halcon [14], are available for building vision systems. However, regardless of how mature the software support is, considerable hardware setup and human intervention are still needed to complete the process. Camera calibration must be performed before operation and supplies the various image processing algorithms with important parameters such as the focal length, image center, intrinsic and extrinsic parameters, and lens distortion [15,16]. Camera calibration is also a challenging and time-consuming process for an operator unfamiliar with the characteristics of the camera on the production line. It remains a major issue that is not easily solved in the automation industry and complicates the introduction of production line automation. Traditional vision-based methods [17,18,19,20] require 3D fixtures corresponding to a reference coordinate system to calibrate a robot. These methods are time-consuming, inconvenient, and may not be feasible in some applications.
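To make the calibration step concrete, the sketch below shows how the required intrinsic parameters can be estimated with OpenCV's standard chessboard routine. It is a minimal illustration, not the exact procedure used in this work; the board geometry, square size, and image folder are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed chessboard geometry for illustration: 8 x 6 inner corners, 25 mm squares.
PATTERN = (8, 6)
SQUARE_SIZE = 25.0  # mm

# 3D positions of the board corners in the board frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in sorted(glob.glob("calib_images/*.png")):   # assumed image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]                         # (width, height)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Estimate the intrinsic matrix K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("RMS reprojection error (pixels):", rms)
```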
Depending on whether the camera is mounted on the robotic arm or installed in the working environment, the calibration is categorized as eye-in-hand (camera-in-hand) or eye-to-hand (stand-alone) calibration [21]. The purpose of hand–eye calibration is similar to that of robot arm end-point tool calibration (TCP calibration), which obtains a constant homogeneous transformation matrix between the robot end-effector and the tool end-point [22]. Unlike TCP calibration, however, it cannot be performed simply by touching fixed points with a tool at different positions, because the camera's optical center cannot physically touch a reference point. Vision systems therefore obtain the hand–eye transformation matrix with other methods, such as the parametrization of a stochastic model [23] and dual-quaternion parameterization [24]. In self-calibration methods, the camera is rigidly linked to the robot end-effector [25]. A vision-based measurement device and a posture measuring device have been used in systems that capture robot position data to model manipulator stiffness [26] and estimate kinematic parameters [27,28,29], with the optimization based on the end-effector's measured positions. However, these methods require offline calibration, which is a limitation. In such systems, accurate camera model calibration and robot kinematics model calibration are required for accurate positioning, and the camera calibration procedure needed to achieve high accuracy is therefore time-consuming and expensive [30].
Positioning of the manufactured object is an important factor in industrial arm applications. If an object is not correctly positioned, it may cause assembly failure or damage the object. Consequently, the accuracy of object positioning often indirectly influences the processing accuracy of the automated system. Although numerous studies have addressed accurate object positioning based on vision systems [31,32,33], no system has combined this with an offline programming platform to perform AOI inspection on a production line. Table 1 compares the performance of the system proposed here with existing vision-based position correction systems. The proposed system expedites the development of a vision-based object position correction module for a robot-guided inspection system, allowing the robot-guided inspection system to complete AOI tasks automatically on a production line regardless of the object's position. In the proposed system, the AOI targets can be automatically mapped onto a new object for the inspection task, whereas existing vision systems were developed to locate the object's position but not for the production line. Operating these systems requires a skilled operator, and integrating them with other systems is tedious. Furthermore, user-defined robot target positions cannot be updated for inspection if the object's position changes, which makes it more challenging to perform tasks on the production line. The proposed position correction system is capable of self-calibration and can update the object position and AOI targets automatically on the production line.
Here, we propose a novel approach to automating manufacturing systems for various applications in order to solve the object position error encountered on the production line. We developed an automated position correction module that locates an object's position and adjusts the robot pose and position based on the detected displacement or rotation errors. The proposed position correction module is based on an automatic hand–eye calibration and the PnP algorithm. The automatic hand–eye calibration is performed using a calibration board to reduce manual error, whereas the PnP algorithm calculates the object position error using artificial marker images. The position correction module identifies the object's current position and then measures and adjusts the robot work points for a defined task. This module was integrated with the autonomous robot-guided inspection system to build a smart system that performs AOI tasks on the production line. The robot-guided inspection system, based on the offline programming (OLP) platform, was developed by integrating a 2D/3D vision module [34]. In addition, the position correction module maps the defined AOI target positions onto a new object unless they are changed. The effectiveness and robustness of the proposed system were demonstrated by conducting two tests and comparing the captured images with sets of standard images. This innovative system minimizes human effort and time, thereby expediting the AOI setup process on the production line and increasing productivity.
The remainder of this paper is organized as follows: Section 2 gives an overview of the integration of the position correction system with the robot-guided inspection architecture; Section 3 describes the position correction system and introduces the proposed method; Section 4 details the integration of the position correction module with the OLP platform and reports the system performance; Section 5 presents the conclusions.

2. System Overview

The robot-guided inspection system and its vision module were designed and developed by Bedaka et al. [34]. The robot-guided inspection system, shown in the blue dashed boxes in Figure 1, consists of an OLP platform and a vision module. The OLP platform was built on the OCC open-source libraries to generate a robot trajectory for 3D scanning and to define AOI target positions using CAD information [35]. The robot-guided inspection system efficiently performs AOI planning tasks using only scanned data, does not require a physical object or an industrial manipulator, and can be deployed on different production lines. However, the system as developed was not comprehensive enough to be used on an assembly line to perform reliable AOI tasks. Therefore, the robot-guided inspection system was integrated with the position correction module (red dashed boxes in Figure 1) to resolve issues related to object displacement or rotation errors on a production line for AOI tasks. Figure 1 presents a complete overview of the proposed system architecture, which includes the OLP platform, the vision module, and the position correction module. In this study, the primary objective of the position correction system was to calculate the rotation and translation of the new object on the production line using artificial markers on the object. Moreover, the proposed system was developed to minimize the complexity of hand–eye calibration and position correction within the production line. In addition, we aimed to free the user of the integrated autonomous robot-guided system from having to redefine the AOI target positions unless they are changed, which not only saves time and effort but also increases productivity. The proposed position correction system includes a simple hand–eye calibration method that supports the position correction method.
The flowchart shown in Figure 2 explains the automatic hand–eye calibration method, which is part of the position correction system. The calibration method is divided into three main stages: "environment setup and initial settings", "robot scan and trajectory planning", and "image capture based on the scan trajectory". In the environment setup and initialization stage, the user prepares the environment for calibration by measuring the position of the calibration board in the workspace, positioning the arm so that it can see the calibration board, and providing other simple basic settings. The robot scan and trajectory planning stage records the optimal end-effector positions, and the calibration board image is captured at each of these positions during the image capture stage. The environment setup and initial settings stage is the only part of the system that requires manual operation (Figure 2). The proposed calibration method initializes the position correction module so that it can measure and compensate for the position error before the AOI target positions are defined on new objects.
The flowchart of the proposed object correction methodology is presented in Figure 3 and is divided into an offline registration process and real-time online object positioning. In the offline process, the user specifies the robot's artificial marker detection position, the marker pattern design, and the work points. In the online process, the system takes a picture of the artificial marker at the specified position based on the user's offline settings. The system then identifies the current object position and autonomously measures and adjusts the robot work points for the AOI inspection task. The position correction system is subsequently integrated with the autonomous robot-guided optical inspection system to demonstrate its performance.
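A rough sketch of the data-collection loop behind the last two stages is shown here. The robot and camera handles (robot.move_to, robot.get_pose, camera.grab) are hypothetical placeholders, since the controller interface is not described at code level in this paper; only the chessboard detection is a standard OpenCV call.

```python
import cv2

def collect_calibration_views(robot, camera, scan_poses, pattern=(8, 6)):
    """Move the arm through the planned scan trajectory and keep only the
    views in which the calibration board is fully detected."""
    views = []
    for pose in scan_poses:                      # poses from the planning stage
        robot.move_to(pose)                      # hypothetical motion command
        image = camera.grab()                    # hypothetical image capture
        found, corners = cv2.findChessboardCorners(image, pattern)
        if found:
            # Pair the recorded end-effector pose with the board detection.
            views.append((robot.get_pose(), corners, image))
    return views
```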

3. Overview of the Position Correction System

In this study, an image-based object position correction system was designed and developed. Object positioning has always been a key component of the automated manufacturing process. The proposed position correction system is built on a calibration methodology and a position correction methodology. The calibrated camera identifies and defines the object position using a specific artificial marker on the object and PnP image processing [36,37], as shown in Figure 4. A transformation matrix T is then obtained from the marker coordinate frame (PMarker) to the camera coordinate frame (PCamera). The displacement error of the work point is determined from the difference between the transformation matrices obtained before and after the object shifts. The system then compensates for the position error before defining the AOI target positions on the new object. The system simulation results were evaluated before being integrated with the robot-guided inspection system. This system assists the OLP platform in performing robot-guided AOI applications to automatically inspect misplaced components of manufactured objects on a production line.
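For reference, the marker-to-camera transform T can be recovered from a single image with OpenCV's PnP solver, as in the sketch below. The marker corner coordinates and the 80 mm side length are assumptions for illustration; the calibrated intrinsics K and distortion coefficients come from the calibration stage.

```python
import cv2
import numpy as np

# Assumed 3D corner coordinates of the artificial marker in its own frame (mm).
MARKER_POINTS = np.array([[0, 0, 0], [80, 0, 0], [80, 80, 0], [0, 80, 0]],
                         dtype=np.float64)

def marker_to_camera_transform(corners_px, K, dist):
    """Solve PnP for the detected marker corners (4x2 pixel coordinates) and
    return the 4x4 homogeneous transform T with P_camera = T @ P_marker."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS, corners_px, K, dist)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```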

3.1. Automatic Robot Hand–Eye Calibration Methodology

We evaluated the performance of the proposed automatic hand–eye calibration process shown in Figure 2. The first step of this process was to measure the position of the calibration board in the workspace. A calibration board (an 8 × 6 chessboard) was fixed on the robot arm and moved closer to a second calibration board on the table [38,39], as shown in Figure 5. We then adjusted the robot arm and camera position so that both calibration boards were visible simultaneously. Two transformation matrices, A and B, were used to calculate the relationship between the calibration boards. Transformation matrix A was obtained from the robot arm controller by recording the end-effector coordinates, and transformation matrix B was calculated by solving the PnP equations on the camera image. The transformation matrix between the robot arm and calibration board 2 was then calculated from the relationship between A and B, as shown in Figure 5.
Once the transformation matrix was obtained, the initial position of the robot was adjusted to perform automatic hand–eye calibration, as shown in Figure 6. The proposed system moved the robot along the hand–eye calibration trajectory and took pictures of the calibration board at the different robot positions shown in Figure 7. Table 2 presents each position of the robot end-effector relative to the robot arm's initial position, and the hand–eye calibration results are shown in Table 3. Using these results, the transformation matrix between the robot arm end-effector and the camera connector was calculated and compared with the transformation matrix derived from the original connector design. The proposed calibration method had an error of around 2 mm on the z-axis and a larger error of nearly 9 mm on the x-axis. The results obtained from the hand–eye calibration were nevertheless sufficiently close to the designed connector values. Further analysis was performed to identify potential sources of calibration error for future improvement. Errors may have been introduced by the 3D-printed connector, whose camera mounting center was difficult to estimate precisely. In addition, hand–eye calibration is itself a complex process, and the proposed approach uses the simple AX = ZB closed-form conversion relationship for inference, which leaves room for error. The calibration results were used in the proposed position correction methodology to compute and compensate for the position error of a new object on a production line.
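As a hedged illustration of how such an AX = ZB problem can be solved with off-the-shelf tools, the sketch below uses OpenCV's robot-world/hand-eye solver (available in OpenCV 4.5 and later) rather than the authors' own implementation; the direction conventions of the input poses should be checked against the specific robot controller.

```python
import cv2
import numpy as np

def solve_ax_zb(board_in_cam, gripper_in_base):
    """board_in_cam:   list of 4x4 calibration-board poses in the camera frame
                       (one PnP result per robot position).
    gripper_in_base:   list of 4x4 end-effector poses reported by the robot
                       controller (gripper expressed in the base frame).
    Returns X (gripper -> camera) and Z (base -> board) of AX = ZB."""
    R_w2c = [T[:3, :3] for T in board_in_cam]
    t_w2c = [T[:3, 3].reshape(3, 1) for T in board_in_cam]

    # OpenCV expects base-to-gripper transforms, i.e. the inverse of the
    # poses reported by most controllers.
    base2grip = [np.linalg.inv(T) for T in gripper_in_base]
    R_b2g = [T[:3, :3] for T in base2grip]
    t_b2g = [T[:3, 3].reshape(3, 1) for T in base2grip]

    R_b2w, t_b2w, R_g2c, t_g2c = cv2.calibrateRobotWorldHandEye(
        R_w2c, t_w2c, R_b2g, t_b2g)

    X = np.eye(4); X[:3, :3] = R_g2c; X[:3, 3] = t_g2c.ravel()
    Z = np.eye(4); Z[:3, :3] = R_b2w; Z[:3, 3] = t_b2w.ravel()
    return X, Z
```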

3.2. Object Position Correction Methodology

After obtaining the calibration results, the user sets the checkpoint position (PCheck) for the robot arm with the camera in the initial "template login phase" to capture a full artificial marker image. The robot arm moves to the PCheck position and uses the camera to detect the artificial marker and solve the PnP problem. The transformation matrix T between the artificial marker coordinate frame (PMarker) and the camera coordinate frame (PCamera) is obtained using Equation (1):
$P_{Camera} = [T]\, P_{Marker}$    (1)
where T is recorded as the standard transformation matrix (TS) and is used as a reference sample to verify and measure changes in the object position.
Once the standard position has been obtained, the system enters the "error compensation phase", which compensates for the object's position error during various manufacturing applications. In practice, there are several different work points for the various AOI, machining, and assembly operations; the positions of these work points are recorded and collectively denoted P. If the object shifts during the process, the robot arm moves to the checkpoint position (PCheck), detects the marker, and solves the PnP problem again to obtain a new marker position (PMarker). This yields a new transformation matrix TN between the new marker coordinate frame (PMarker) and the camera coordinate frame (PCamera).
The offset transformation matrix TD in Equation (2) describes the displacement and rotation of the artificial marker and is calculated from the standard transformation matrix TS and the new transformation matrix TN:
$P_{Camera} = [T_N]\, P_{Marker2} = [T_S]\, P_{Marker1}$
$P_{Marker2} = [T_N]^{-1} [T_S]\, P_{Marker1} = [T_D]\, P_{Marker1}$
$[T_D] = [T_N]^{-1} [T_S]$    (2)
The rotation and translation errors for defining the new work points, PNew, are calculated using the offset transformation matrix, the artificial marker center point offset (S), and the robot arm's original coordinates.
Following this, the work point P is translated so that the artificial marker center point coincides with the origin of the robot arm coordinate system, as shown in Equation (3):
$P_{New} = \begin{bmatrix} I_{3\times3} & -S \\ \mathbf{0} & 1 \end{bmatrix} P$    (3)
The work point P is then rotated according to the artificial marker rotation, as shown in Equation (4):
$P_{New} = \begin{bmatrix} R_{3\times3} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} I_{3\times3} & -S \\ \mathbf{0} & 1 \end{bmatrix} P$    (4)
The work point P is then translated back to the original artificial marker coordinate, as shown in Equation (5):
$P_{New} = \begin{bmatrix} I_{3\times3} & S \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} I_{3\times3} & -S \\ \mathbf{0} & 1 \end{bmatrix} P$    (5)
Using Equation (6), a new work point (PNew) is defined for the AOI task after error compensation:
$P_{New} = \begin{bmatrix} I_{3\times3} & t_{3\times1} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} I_{3\times3} & S \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} I_{3\times3} & -S \\ \mathbf{0} & 1 \end{bmatrix} P$    (6)
In summary, the translation and rotation errors are computed once the object positioning system recognizes and evaluates the artificial marker in the actual application. Based on the proposed approach, the position correction system adjusts and generates new work point coordinates for AOI inspection tasks.
The schematic diagram shown in Figure 8 was generated using MATLAB and illustrates the proposed position correction system. In the simulation, the computer server board (the object) was moved to an unknown position, after which the position correction system identified the AOI camera target points based on the artificial marker. The position correction system located the AOI target points and the object position accurately, apart from the errors introduced by the hardware. The system successfully found and compensated for the position error to calculate the robot's new AOI target points, irrespective of the object position. Thus, the object position correction system effectively utilized the automatic hand–eye calibration method and simple artificial markers to detect the object's current position after any shift. Section 4 discusses the integration and implementation of the proposed method in the AOI application, as well as the experimental results.
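A minimal numpy sketch of this compensation chain (Equations (2)–(6)) is given below. Variable names mirror the notation above; the conversion of the camera-frame offset into robot base coordinates via the hand–eye result is omitted for brevity, so this is an illustration of the block-matrix composition rather than the complete implementation.

```python
import numpy as np

def translation(v):
    """4x4 homogeneous translation by the 3-vector v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

def compensate_work_points(T_S, T_N, S, work_points):
    """T_S, T_N : 4x4 marker-to-camera transforms for the standard and the
                  shifted object (Equations (1) and (2)).
    S           : artificial marker center point offset (3-vector).
    work_points : Nx3 array of the original work-point positions P.
    Returns the corrected work-point positions P_New (Equation (6))."""
    T_D = np.linalg.inv(T_N) @ T_S            # offset transform, Equation (2)
    R = np.eye(4)
    R[:3, :3] = T_D[:3, :3]                   # rotation about the marker center
    t = T_D[:3, 3]                            # residual translation error

    # Equation (6): translate to the marker center, rotate, translate back,
    # then apply the translation error.
    M = translation(t) @ translation(S) @ R @ translation(-S)

    P = np.hstack([work_points, np.ones((len(work_points), 1))])  # homogeneous
    return (M @ P.T).T[:, :3]
```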

4. Integration of Position Correction Module with OLP Platform

The position correction system we developed was integrated with the autonomous robot-guided optical inspection system to build a smart system for the production line. Prior to performing position correction, a path for real-time scanning and the target positions for AOI tasks were generated and visualized graphically in the OLP platform, as shown in Figure 9 [34]. The generated robot program was then sent to the HIWIN-620 industrial robot to capture AOI images, which were compared with virtual images. However, this system alone was still unable to reliably perform AOI tasks on a production line. Therefore, the robot-guided inspection system was integrated with the position correction module to resolve the issues related to object displacement or rotation errors on a production line for AOI tasks.
Here, we report and discuss experiments performed using the proposed position correction module for robot-guided inspection. On a production line, the robot arm employs the object position correction module to perform an AOI operation autonomously. To execute an AOI inspection, the robot arm gathers photos of the target object from various angles. If the object shifts, the proposed system detects this and adjusts the robot’s AOI image shooting position. Once positional changes are made, the system captures the defined AOI target images. The object used in this experiment was a large computer server with four artificial markers, as shown in Figure 10.
During the system execution process, we first manually guided the robot arm to shoot the positioning marker image, as shown in Figure 11. We obtained the sample image of the positioning marker as a reference for the position correction system, as shown in Figure 12, and recorded the positioning marker position and its transformation matrix relative to the camera.
Once the sample image of the positioning marker was captured, the robot’s target positions were selected for the AOI inspection task before any displacement or rotation. In the experiment, four different robot shooting positions were selected at different heights and angles, as shown in Figure 13, and the images captured at these positions are shown in Figure 14. These were the target points used to perform the position correction procedure.
Following this preparatory procedure, the system already had all of the parameters and specification data required to perform object image repositioning. Figure 14 shows the original standard image without displacement. Two experiments with random manual displacement were conducted to evaluate the performance of the proposed position correction module, as shown in Figure 15. Once the manual displacement was carried out, the proposed position correction modules calculated the new AOI position to capture images, as shown in Figure 16, Figure 17, Figure 18 and Figure 19. These were then compared with the standard images shown in Figure 14.
Two experiments were conducted to measure the performance of the position correction system. In test 1, the object was moved by 49.1 mm, the final average residual error was 5.15 mm, and the proposed system compensated for 91.5% of the position error. In test 2, the object was moved by 50.6 mm, the final average residual error was 3.80 mm, and the proposed system compensated for 92.5% of the position error. New images were captured and compared with the standard images for the two random object positions. The captured images differed only slightly from the standard images. These results demonstrate that the proposed system efficiently compensates for most of the error caused by object displacement. Table 4 presents the error analysis of the object position correction system, in which the new images have a distance error (norm) of 10–40 pixels compared to the standard images (Figure 20), with an average of 21.09 pixels. The errors measured in pixels were then converted to mm (Table 5); the distance error was between 2 and 6 mm (Figure 21), with a mean error of 3.97 mm. The proposed method could be improved by positioning the camera more accurately, replacing the 3D-printed camera holder, and minimizing the calibration error, which may improve the error compensation to up to 95.3%. The developed system resolved the issues associated with object translation or rotation errors on the production line. The robot-guided inspection system with the position correction module can thus perform user-defined AOI tasks autonomously on any production line.
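The per-corner error metric reported in Tables 4 and 5 can be reproduced with a few lines of numpy, as sketched below; the corner detections are assumed to be available, and the millimetre conversion uses an assumed image scale rather than the calibrated value used for Table 5.

```python
import numpy as np

MM_PER_PIXEL = 0.188   # assumed scale for illustration only

def corner_errors(std_corners, new_corners):
    """std_corners, new_corners: 4x2 arrays of the marker corner pixel
    coordinates (P1-P4) in the standard and re-captured images.
    Returns per-corner (X, Y, Norm) errors in pixels and in mm."""
    d = new_corners - std_corners            # X and Y offsets in pixels
    norm = np.linalg.norm(d, axis=1)         # Euclidean distance error
    err_px = np.column_stack([d, norm])
    return err_px, err_px * MM_PER_PIXEL
```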

5. Conclusions

In this study, a novel position correction module was developed and integrated with an autonomous robot-guided optical inspection system to perform AOI tasks on production lines and correct for object displacement and rotation. The robot-guided system assists the user in selecting the AOI targets and capturing target images in the virtual environment, and real-time images are captured using the industrial manipulator at the corresponding positions. However, that system alone was not reliable enough to be used on an assembly line to perform AOI tasks. Therefore, the robot-guided inspection system was integrated with a position correction module to resolve object displacement or rotation error issues on a production line for AOI tasks. The position correction system calculates and compensates for the position error of the new object on the production line using artificial markers. We performed two tests to evaluate the effectiveness of the proposed position correction module; the proposed system had a mean error of 3.97 mm, or 21.09 pixels. These results indicate that the robot-guided system with a position correction module is capable of performing AOI tasks on a production line. In addition, the user of the integrated autonomous robot-guided system is not required to redefine the AOI targets for a new object position unless they are changed, which not only saves time and effort but also increases productivity. The integration of the position correction module with the robot-guided system resulted in a smart system that enables inspection tasks on the production line without prior knowledge of the object's position. However, the proposed method is currently limited to using artificial markers and the simple AX = ZB closed-form conversion for hand–eye calibration.

Author Contributions

Conceptualization, C.-Y.L.; funding acquisition, C.-Y.L.; investigation, A.K.B., A.M.M., and S.-C.L.; methodology, A.K.B. and C.-Y.L.; software, A.K.B., A.M.M., and S.-C.L.; supervision, C.-Y.L.; validation, A.K.B. and S.-C.L.; visualization, A.K.B.; writing—original draft, A.K.B. and Y.-S.C.; writing—review and editing, A.K.B. and C.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by both the Taiwan Building Technology Center and the Center for Cyber-Physical System Innovation from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan. Additionally, this work was financially supported by Inventec Co. Ltd., Taiwan (R.O.C).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Backer, K.; DeStefano, T.; Menon, C.; Suh, J.R. Industrial Robotics and the Global Organisation of Production; OECD Science, Technology and Industry Working Papers, No. 2018/03; OECD Publishing: Paris, France, 2018. [Google Scholar]
  2. Tripathi, S.; Shukla, S.; Attrey, S.; Agrawal, A.; Bhadoria, V.S. Smart industrial packaging and sorting system. In Strategic System Assurance and Business Analytics; Springer: Singapore, 2020; pp. 245–254. [Google Scholar]
  3. Vaidya, S.; Ambad, P.; Bhosle, S. Industry 4.0–a glimpse. Procedia Manuf. 2018, 20, 233–238. [Google Scholar] [CrossRef]
  4. Marcon, P.; Arm, J.; Benesl, T.; Zezulka, F.; Diedrich, C.; Schröder, T.; Belyaev, A.; Dohnal, P.; Kriz, T.; Bradac, Z. New approaches to implementing the SmartJacket into industry 4.0. Sensors 2019, 19, 1592. [Google Scholar] [CrossRef] [Green Version]
  5. Bahrin, M.A.K.; Othman, M.F.; Azli, N.H.N.; Talib, M.F. Industry 4.0: A review on industrial automation and robotic. Jurnal Teknologi 2016, 78, 6–13. [Google Scholar]
  6. Ermolov, I. Industrial robotics review. In Robotics: Industry 4.0 Issues & New Intelligent Control Paradigms; Springer: Cham, Switzerland, 2020; pp. 195–204. [Google Scholar]
  7. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot guidance using machine vision techniques in industrial environments: A comparative review. Sensors 2016, 16, 335. [Google Scholar] [CrossRef]
  8. Forsyth, D.A.; Ponce, J. Computer Vision: A Modern Approach, 2nd ed.; Pearson Education: New York, NY, USA, 2012. [Google Scholar]
  9. Ali, M.H.; Aizat, K.; Yerkhan, K.; Zhandos, T.; Anuar, O. Vision-based robot manipulator for industrial applications. Procedia Comput. Sci. 2018, 133, 205–212. [Google Scholar] [CrossRef]
  10. Nakhaeinia, D.; Fareh, R.; Payeur, P.; Laganière, R. Trajectory planning for surface following with a manipulator under RGB-D visual guidance. In Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linkoping, Sweden, 21–26 October 2013; pp. 1–6. [Google Scholar]
  11. Fan, X.; Wang, X.; Xiao, Y. A combined 2D-3D vision system for automatic robot picking. In Proceedings of the 2014 International Conference on Advanced Mechatronic Systems, Kumamoto, Japan, 10–12 August 2014; pp. 513–516. [Google Scholar]
  12. Kim, K.; Kim, J.; Kang, S.; Kim, J.; Lee, J. Vision-based bin picking system for industrial robotics applications. In Proceedings of the 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Daejeon, Korea, 26–28 November 2012; pp. 515–516. [Google Scholar]
  13. Camarillo, D.B.; Loewke, K.E.; Carlson, C.R.; Salisbury, J.K. Vision based 3-D shape sensing of flexible manipulators. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 2940–2947. [Google Scholar]
  14. Lin, Z.; Shao, J.; Zhang, X.; Deng, X.; Zheng, T. Multiclass Fruit Packing System Based on Computer Vision. J. Phys. Conf. Ser. 2020, 1449, 012097. [Google Scholar] [CrossRef]
  15. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112. [Google Scholar]
  16. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  17. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef] [Green Version]
  18. Zhuang, H.; Roth, Z.S.; Wang, K. Robot calibration by mobile camera systems. J. Robot. Syst. 1994, 11, 155–167. [Google Scholar] [CrossRef]
  19. Renaud, P.; Andreff, N.; Marquet, F.; Martinet, P. Vision-based kinematic calibration of a H4 parallel mechanism. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 14, pp. 1191–1196. [Google Scholar]
  20. Daney, D.; Andreff, N.; Papegay, Y. Interval method for calibration of parallel robots: A vision-based experimentation. Mech. Mach. Theory 2006, 41, 929–944. [Google Scholar] [CrossRef]
  21. Carlson, F.B.; Johansson, R.; Robertsson, A. Six DOF eye-to-hand calibration from 2D measurements using planar constraints. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September 2015; pp. 3628–3632. [Google Scholar]
  22. Bayro-Corrochano, E.; Daniilidis, K.; Sommer, G. Motor algebra for 3D kinematics: The case of the hand-eye calibration. J. Math. Imaging Vis. 2000, 13, 79–100. [Google Scholar] [CrossRef]
  23. Strobl, K.H.; Hirzinger, G. Optimal hand-eye calibration. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4647–4653. [Google Scholar]
  24. Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  25. Motta, J.M.S.T.; De Carvalho, G.C.; Mcmaster, R.S. Robot calibration using a 3D vision-based measurement system with a single camera. Robot. Comput. Integr. Manuf. 2001, 17, 487–497. [Google Scholar] [CrossRef]
  26. Alici, G.; Shirinzadeh, B. Enhanced stiffness modeling, identification and characterization for robot manipulators. IEEE Trans. Robot. 2005, 21, 554–564. [Google Scholar] [CrossRef] [Green Version]
  27. Newman, W.; Birkhimer, C.; Horning, R. Calibration of a Motoman P8 robot based on laser tracking. In Proceedings of the IEEE Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 3597–3602. [Google Scholar]
  28. Omodei, A.; Legnani, G.; Adamini, R. Three methodologies for the calibration of industrial manipulators: Experimental results on a SCARA robot. J. Robot. Syst. 2000, 17, 291–307. [Google Scholar] [CrossRef]
  29. Alici, G.; Shirinzadeh, B. A systematic technique to estimate positioning errors for robot accuracy improvement using laser interferometry-based sensing. Mech. Mach. Theory 2005, 40, 879–906. [Google Scholar] [CrossRef]
  30. Zhang, B.; Wang, J.; Rossano, G.; Martinez, C. Vision-guided robotic assembly using uncalibrated vision. In Proceedings of the IEEE International Conference on Mechatronics and Automation, Beijing, China, 7–10 August 2011; pp. 1384–1389. [Google Scholar]
  31. Meng, Y.; Zhuang, H. Autonomous robot calibration using vision technology. Robot. Comput. Integr. Manuf. 2007, 23, 436–446. [Google Scholar] [CrossRef]
  32. Michalos, G.; Makris, S.; Eytan, A.; Matthaiakis, S.; Chryssolouris, G. Robot path correction using stereo vision system. Procedia Cirp. 2012, 3, 352–357. [Google Scholar] [CrossRef] [Green Version]
  33. Luo, Z.; Zhang, K.; Wang, Z.; Zheng, J.; Chen, Y. 3D pose estimation of large and complicated workpieces based on binocular stereo vision. Appl. Optics. 2017, 56, 6822–6836. [Google Scholar] [CrossRef]
  34. Bedaka, A.K.; Mahmoud, A.M.; Lee, S.C.; Lin, C.Y. Autonomous robot-guided inspection system based on offline programming and RGB-D model. Sensors 2018, 18, 4008. [Google Scholar] [CrossRef] [Green Version]
  35. Bedaka, A.K.; Lin, C.Y.; Huang, S.T. Autonomous CAD model-based industrial robot motion planning platform. Int. J. Robot. Autom. 2019, 34. [Google Scholar] [CrossRef]
  36. Quan, L.; Lan, Z. Linear n-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 774–780. [Google Scholar] [CrossRef] [Green Version]
  37. Horaud, R.; Conio, B.; Leboulleux, O.; Lacolle, B. An analytic solution for the perspective 4-point problem. Comput. Vis. Graph. Image Process. 1989, 47, 33–44. [Google Scholar] [CrossRef] [Green Version]
  38. De la Escalera, A.; Armingol, J.M. Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration. Sensors 2010, 10, 2027–2044. [Google Scholar] [CrossRef]
  39. Du, G.; Zhang, P. Online robot calibration based on vision measurement. Robot. Comput. Integr. Manuf. 2013, 29, 484–492. [Google Scholar] [CrossRef]
Figure 1. Overview of the position correction module integration with the robot-guided inspection system architecture.
Figure 2. Flowchart of the hand–eye calibration methodology.
Figure 3. Flowchart of the object position correction methodology.
Figure 4. Artificial marker attached to the object.
Figure 5. The relationship between the robot and the two calibration boards.
Figure 6. Initial steps of the calibration process. (a) Initial robot arm position, (b) image captured w.r.t. initial position, (c) robot arm position after being manually adjusted, (d) image captured after manual adjustment.
Figure 7. Six different robot positions for automatic hand–eye calibration. (a) 1st robot position, (b) image w.r.t. 1st position, (c) 2nd robot position, (d) image w.r.t. 2nd position, (e) 3rd robot position, (f) image w.r.t. 3rd position, (g) 4th robot position, (h) image w.r.t. 4th position, (i) 5th robot position, (j) image w.r.t. 5th position, (k) 6th robot position, (l) image w.r.t. 6th position.
Figure 8. Simulation results of the position correction system. The large black rectangle represents the entire object (computer server board); the blue rectangle is the artificial marker; the three black intermediate size rectangles represent the AOI capture position of the camera; the red and green lines represent the camera’s plane coordinates used to capture the AOI images and the camera coordinate axis, respectively.
Figure 9. Robot-guided AOI inspection system.
Figure 10. Object positioning system’s target object with artificial markers. The red circle shows the artificial marker (square containing four crosses) used for positioning. The remaining three markers were used to analyze positional errors after camera shooting.
Figure 11. Camera shooting pose at the positioning artificial marker.
Figure 12. Sample image of the positioning marker w.r.t. the robot pose shown in Figure 11.
Figure 13. Four different robot poses used to capture the standard AOI target images before manual displacement or rotation. Robot pose at the (a) 1st, (b) 2nd, (c) 3rd, and (d) 4th AOI target positions.
Figure 14. Standard AOI images captured at the four target positions. (a) Standard image captured w.r.t Figure 13a, (b) Standard image captured w.r.t Figure 13b, (c) Standard image captured w.r.t Figure 13c, (d) Standard image captured w.r.t Figure 13d.
Figure 15. Two different object positions after random manual displacement and rotation to test the object correction system. (a) The first and (b) second random positions of the object.
Figure 16. Position correction system test results for two random positions. Standard image without displacement or rotation corresponding to (a) Figure 14a, and new images after (b) the first random position shown in Figure 15a and (c) the second random position shown in Figure 15b.
Figure 17. Position correction system test results for two random positions. Standard image without displacement or rotation corresponding to (a) Figure 14b, and new images after (b) the first random position shown in Figure 15a and (c) the second random position shown in Figure 15b.
Figure 18. Position correction system test results for two random positions. Standard image without displacement or rotation shown in (a) Figure 14c, and new images after (b) the first random position shown in Figure 15a and (c) the second random position shown in Figure 15b.
Figure 19. Position correction system test results for two random positions. Standard image without displacement or rotation shown in (a) Figure 14d, and new images after (b) the first random position shown in Figure 15a and (c) the second random position shown in Figure 15b.
Figure 20. Distance error (in pixels) for test 1 and test 2.
Figure 21. Distance error (in mm) for test 1 and test 2.
Table 1. Comparison between existing vision-based position correction systems and the proposed system.

Features                                          Vision-Based Systems [31,32,33]    Proposed System
Self-calibration                                  Yes                                Yes
Errors and failures                               High                               Low
Online position correction in a production line   No                                 Yes
Setup complexity with OLP                         High                               Low
Auto-correct robot targets                        No                                 Yes
Table 2. The six different positions of the robot end-effector relative to the initial position.

Position    X         Y         Z         RX (α)    RY (β)    RZ (γ)
A           0         0         0         0         0         0
B           0         +30 cm    +10 cm    +30°      0         0
C           0         −30 cm    −10 cm    −30°      0         0
D           −30 cm    0         0         0         +30°      0
E           +30 cm    0         0         0         −30°      0
F           0         0         0         0         0         +30°
Table 3. Hand–eye calibration results and error.

Parameter       Connector Design Size    Hand–Eye Calibration Results    Error
dX (mm)         0                        −8.559                          8.559
dY (mm)         0                        −0.227                          0.227
dZ (mm)         190                      189.387                         1.613
Rx (degrees)    0                        −0.079                          0.079
Ry (degrees)    0                        −0.868                          0.868
Rz (degrees)    0                        −0.874                          0.874
Table 4. Error analysis of the object position correction system (measured in pixels).

Error (pixels)     First Displacement                 Second Displacement
                   X         Y         Norm           X         Y         Norm
AOI position 1
P1                 1.11      24.39     24.41          −7.29     −18.92    20.28
P2                 0.26      23.89     23.89          −4.26     −17.44    17.95
P3                 0.32      23.69     23.69          −4.52     −13.72    14.45
P4                 1.46      25.02     25.06          −7.50     −13.87    15.77
AOI position 2
P1                 0.44      19.16     19.16          −14.16    −9.02     17.04
P2                 −3.52     18.91     19.23          −10.49    −9.60     14.22
P3                 −3.50     14.05     14.48          −10.59    −5.62     11.99
P4                 0.87      14.31     14.34          −14.40    −4.59     15.11
AOI position 3
P1                 −25.55    15.93     29.83          0.64      −26.58    26.59
P2                 −31.64    15.09     35.06          0.01      −27.12    27.12
P3                 −32.66    8.48      33.74          0.01      −26.13    26.13
P4                 −26.29    8.76      27.71          0.56      −25.08    25.08
AOI position 4
P1                 16.36     8.34      18.36          −25.67    4.65      26.08
P2                 12.36     10.61     16.28          −21.25    1.20      21.28
P3                 10.16     5.75      11.67          −18.65    5.03      19.31
P4                 14.52     3.36      14.90          −22.78    8.97      24.49
Table 5. Error analysis of the object position correction system (in mm).

Error (mm)         First Displacement                 Second Displacement
                   X         Y         Norm           X         Y         Norm
AOI position 1
P1                 0.22      4.79      4.80           −1.43     −3.72     3.98
P2                 0.05      4.69      4.69           −0.83     −3.43     3.53
P3                 0.06      4.65      4.65           −0.89     −2.70     2.84
P4                 0.29      4.92      4.92           −1.47     −2.73     3.10
AOI position 2
P1                 0.09      3.79      3.79           −2.86     −1.79     3.37
P2                 −0.70     3.74      3.80           −2.08     −1.90     2.81
P3                 −0.69     2.78      2.86           −2.10     −1.11     2.37
P4                 0.17      2.83      2.84           −2.85     −0.91     2.99
AOI position 3
P1                 −4.57     2.75      5.33           0.11      −4.76     4.76
P2                 −5.65     2.70      6.26           0.00      −4.85     4.85
P3                 −5.84     1.52      6.03           0.00      −4.68     4.68
P4                 −4.70     1.57      4.95           0.10      −4.49     4.49
AOI position 4
P1                 3.05      1.55      3.43           −4.79     0.87      4.87
P2                 2.31      1.98      3.04           −3.96     0.22      3.97
P3                 1.90      1.07      2.18           −3.48     0.94      3.60
P4                 2.71      0.63      2.78           −4.25     1.67      4.57
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
