Camera Calibration in High-Speed Robotic Assembly Operations
Abstract
Featured Application
1. Introduction
- Section 1 presents the scientific context, the main goal of the research, and the importance of the studied aspects in the field of robotic manufacturing systems.
- Section 2 presents the state of the art in the field of camera calibration for robotic vision systems.
- Section 3 presents the materials and methods used for research, including the methods used, the experimental setup and procedure, and the implemented calculation algorithms.
- Section 4 presents the measurement results, and the calculation outputs, followed by a discussion that validates the experimental results.
- Section 5 presents the conclusions that highlight the main results and contributions of the research.
2. State of the Art
3. Materials and Methods
- Starting from the zero orientation, the camera was rotated by an arbitrary angle.
- The center of the image represents the origin of the image frame of reference. Using this coordinate system, a pair of points was chosen along the X axis and another pair along the Y axis; for the purpose of this research, these points were designated as measurement points. The points placed along each axis were positioned symmetrically relative to the origin. The procedure was repeated until four pairs of points had been placed along the X axis and another four pairs along the Y axis. The points were placed along the axes at distances from the origin expressed in pixels, as indicated by the camera software: for the first pair, each point was placed at a distance of 20 pixels from the origin, one in the positive direction and one in the negative direction; for the second pair, the distance was 30 pixels; for the third pair, 40 pixels; and for the fourth pair, 50 pixels. The pairs were placed in the same way along both the X and Y axes, as shown in Figure 4.
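- To make this layout concrete, a minimal Python sketch is given below; the function name and the image-frame sign convention are illustrative assumptions, not taken from the camera software.

```python
# Sketch: symmetric measurement point pairs in image-frame pixel coordinates,
# with the origin at the image center (offsets of 20, 30, 40, and 50 px).
def measurement_pairs(offsets=(20, 30, 40, 50)):
    points = []
    for d in offsets:
        points += [(d, 0), (-d, 0)]   # pair along the X axis
        points += [(0, d), (0, -d)]   # pair along the Y axis
    return points

print(measurement_pairs())  # 16 points in total: 4 pairs per axis
```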
- Figure 5 shows all triangles in which the law of cosines was applied. C represents the camera focal point. The Xc and Yc axes represent the camera coordinate system projected onto the measurement plane. The Xm and Ym axes represent the coordinate system of the measurement plane. Both coordinate systems have the same origin O. Cx and Dx are the measurement points along the Xm axis, while Ax and Bx are the corresponding points along the Xc axis (the points in which the distances from C to the measurement points Cx and Dx intersect the Xm axis). Points Cy, Dy, Ay, and By have similar roles with respect to axes Ym and Yc.
Figure 5. Camera calibration—first approach.
- The distance from the camera to each measurement point was determined using the camera software. The first set of measurement points (the two pairs positioned at a distance of 20 pixels from the origin along the X and Y axes) and the camera position relative to the measurement plane are illustrated in Figure 4. The Xc and Yc axes belong to the camera reference frame. The Xm and Ym axes belong to the measurement plane reference frame.
- This approach is based on the law of cosines, used to determine the camera rotation angles around the X and Y axes relative to the measurement plane. The law of cosines relates the three sides of a triangle and one of its angles: it determines the triangle when either all three sides, or two sides and the included angle, are known. The general formula has the following form, with respect to Figure 4, considering the COCy triangle:
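- In its general form, with $\gamma$ the angle opposite side $c$, the law of cosines reads as below; the instantiation for triangle COCy, with the angle taken at vertex O, is a reconstruction consistent with the notation above rather than a copy of the original equation.

$$ c^2 = a^2 + b^2 - 2ab\cos\gamma \qquad\Rightarrow\qquad CC_y^2 = CO^2 + OC_y^2 - 2\,CO \cdot OC_y \cdot \cos(\angle C O C_y) $$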
- For the studied approach, the rotation around the X and Y axes can be considered separately. Thus, for the rotation around the X axis, the angle of rotation can be determined using the OCAy triangle, as shown in Figure 6.
Figure 6. Calculus plane.
- Using this formula, the angle of rotation was determined for all pairs of points. The results were validated using the experimental setup described above by comparing the angle determined through the law of cosines with the actual rotation angle of the camera. Furthermore, the measurements were performed at three different heights chosen randomly.
- The initial orientation of the camera (“zero orientation”) was considered with the Z axis in the vertical position, parallel to the Z axis of the robot’s base coordinate system.
- The goal of the study is to determine the camera position and orientation relative to the measurement plane. During a real robot application, the orientation of the camera is not known and must be determined to perform the calibration. However, to validate the calibration method proposed in this article, the camera was inclined at a predetermined angle of 80° around the Y axis relative to the measurement plane.
- Using the image captured by the camera, four rectangular measurement perimeters were overlaid on the image. The measurement perimeters were concentric and centered on the origin of the image coordinate system. Their dimensions were expressed in pixels, the basic units on which the image structure is built, and measured 2 × 20 pixels, 2 × 30 pixels, 2 × 40 pixels, and 2 × 50 pixels. The perimeters placed over a captured image are illustrated in Figure 7, which shows the camera software interface: the image acquired by the camera is displayed in the central area of the interface, and the measurement perimeters are drawn over the acquired image as red rectangles. To compare the two methods proposed in this paper, a single set of values was acquired using the camera’s proprietary software, so both methods used the same input values.
Figure 7. Measurement perimeters represented upon the acquired image.
- For each measurement perimeter, nine measurement points were considered: the corners of the perimeter, the midpoints of its sides, and the center of the rectangle. The distribution of the measurement points with respect to the image frame of reference is illustrated in Figure 8.
Figure 8. The measurement points represented relative to the image frame of reference.
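- A minimal sketch of these nine points for one perimeter is given below; the half-sizes correspond to the 2 × 20 to 2 × 50 px perimeters above, while the point numbering and axis orientation are assumptions made only for illustration.

```python
# Sketch: the nine measurement points of one rectangular perimeter
# (corners, side midpoints, and center), in image-frame pixel coordinates.
# Point numbering and axis orientation are illustrative assumptions.
def perimeter_points(half_size):
    s = half_size
    return {
        "P1": ( s,  s), "P2": (0,  s), "P3": (-s,  s),
        "P4": ( s,  0), "P5": (0,  0), "P6": (-s,  0),
        "P7": ( s, -s), "P8": (0, -s), "P9": (-s, -s),
    }

for s in (20, 30, 40, 50):     # the 2 x 20 ... 2 x 50 px perimeters
    print(s, perimeter_points(s))
```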
- The distance from the camera to each measurement point was determined using 3D data. The values were expressed with respect to the camera frame of reference.
- The initial orientation of the camera was maintained, and all nine measurement points previously described were used.
- A new spherical coordinate system was considered, and all nine measuring points were defined using this newly created reference system. In this spherical reference system, the first two coordinates, Theta and Phi, are computed based on the point position, while the third, R, is the distance acquired by the 3D camera. The center of the spherical coordinate system is the camera focal point. The correlation between the spherical and Cartesian coordinates is shown in Figure 9.
Figure 9. Correlation between spherical and Cartesian coordinates.
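- Assuming Phi is measured from the camera optical (Z) axis and Theta is the azimuth in the sensor plane, which is consistent with Phi = 0 for the central point P5 in Table 1, the standard spherical-to-Cartesian conversion used before the plane fit would be:

$$ x = R\,\sin\varphi\cos\theta, \qquad y = R\,\sin\varphi\sin\theta, \qquad z = R\,\cos\varphi $$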
- The Theta angle is computed using the point distribution relative to the camera sensor plane, as shown in Figure 10. As an example, for P1 the Theta angle is 45°.
Figure 10. Point positioning in the camera sensor plane.
- The Phi angle was determined using the camera resolution and the maximum angle for each direction. For P4, P5, and P6, the angle is determined in the X plane, while for P2, P5, and P8 it is determined in the Y plane. As P4 and P6 are symmetrical in the camera plane, their two angles are equal, and the angle for P5 is 0, as illustrated in Figure 11.
Figure 11. Phi and R coordinates of the spherical systems for P4, P5, and P6.
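- One plausible mapping, assumed here for illustration, scales the pixel offset Δp from the image center linearly by the camera’s maximum half-angle φmax in that direction; a strict pinhole model would instead use the arctangent of the normalized offset. With N the camera resolution (in pixels) along the considered direction:

$$ \varphi = \frac{\Delta p}{N/2}\,\varphi_{\max} $$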
- For P1, P3, P7, and P9, the Phi angles were calculated considering OP5, a common leg (height) of the right triangles involved (as seen in Figure 12), where the angles P2OP5 and P4OP5 are the ones determined in the previous phase. The values for the Theta and Phi angles are summarized in Table 1.
Figure 12. Phi angle calculations for P1, P3, P7, and P9.
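- One reading of this construction, with OP5 as the common leg and the two perpendicular angular offsets composed, gives for a corner point such as P1:

$$ \tan\varphi_{P1} = \sqrt{\tan^2\varphi_{P2} + \tan^2\varphi_{P4}} $$

which, with the values 5.02232° and 5.35714° from Table 1, yields approximately 7.323°, in agreement with the Phi value listed for P1, P3, P7, and P9.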
- Points P1 to P9 were all transformed from spherical coordinates into Cartesian coordinates in an XYZ space. Under these conditions, the measuring plane is the plane that passes through all nine points. The least squares method was used to estimate the coefficients of the fitted plane, which is defined by the following equation:
z = ax + by + c
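- A minimal sketch of this fit is given below, assuming the nine points have already been converted to Cartesian coordinates; NumPy’s least-squares solver is used here as one possible implementation, and the variable names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an (N, 3) array of Cartesian points
    by linear least squares; the plane normal is then (a, b, -1)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    a, b, c = coeffs
    return a, b, c
```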
- The angles between the fitting plane and the XOY, XOZ, and YOZ planes are then calculated. Each angle is determined using the dot product between the normal vector of the fitting plane (a, b, −1) and the normal vector of the corresponding plane (XOY, XOZ, or YOZ). The angle is given by the following formula:
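- With n1 = (a, b, −1) the normal of the fitted plane and n2 the normal of the reference plane (for example, n2 = (0, 0, 1) for the XOY plane), the dot-product relation described above takes, in its standard reconstructed form:

$$ \cos\alpha = \frac{\lvert \mathbf{n}_1 \cdot \mathbf{n}_2 \rvert}{\lVert \mathbf{n}_1 \rVert\,\lVert \mathbf{n}_2 \rVert} $$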
4. Results and Discussion
- It is worth noting that the lowest error levels were obtained when using the measurement points placed at a 20-pixel distance from the origin.
- From Figure 5, CO and CAy are known from the data acquired by the camera. Also, OAy is known, being the fixed distance imposed between the respective measurement point and the origin O (in this case, corresponding to 20 pixels). These are the input values of the calculation.
- Using the law of cosines, the angle is computed with Equation (5). This represents the camera inclination angle.
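- Assuming Equation (5) is the standard rearrangement of the law of cosines for the angle at O in triangle OCAy, using exactly the known quantities listed above (CO, CAy, and OAy), the computation would be:

$$ \angle A_y O C = \arccos\!\left(\frac{CO^2 + OA_y^2 - CA_y^2}{2\cdot CO\cdot OA_y}\right) $$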
- This algorithm is implemented in the Python program. The subroutine can be applied at any time during a task, by acquiring input data from any planar surface the work objects are placed onto. If a higher level of accuracy is required for the calculation, the camera can be moved closer to the measurement plane and the measurement points can be kept within a 20-pixel distance from the image origin, since these conditions showed the lowest error levels, as illustrated in Table 3.
- The method using the law of cosines is easier to implement and requires far fewer computational resources because it uses less data. In the case of the 20 pixels measurement perimeter, for example, the law of cosines approach uses four points, while the planar interpolation approach uses nine measurement points. This observation applies to all measurement perimeters and all measurement heights. The calculation formula for the law of cosines approach is also simpler.
- However, since it uses fewer measurement points, the law of cosines approach is less accurate. The error levels calculated for the first approach, shown in Table 3, range from 2.19% to 11.42%, with an average of 7.48%.
- It is worth noting that the law of cosines itself has some limitations, generating high error levels for low camera rotation angles. For the most part, the higher error levels are caused by rounding off the numbers used in the calculations and by uncertainties in the working environment.
- For this study, it was taken into account that the robotic vision feature is used for acquiring data from working environments that are not deterministic. This means that these types of cameras will be used in setups with a high number of undetermined variables and deviations. In most cases, such applications may include parts with random positions and orientations, measurement surfaces that deviate from a planar shape, fixtures and supports with low accuracy levels (including 3D printed ones), and so on. The first approach showed that these variations have a strong influence on the results obtained, reflected in a higher error level.
- The planar interpolation method was used to mitigate the influence of the environmental variables on the camera angle calculation. The procedure is harder to implement and uses more data, thus requiring more computational resources. The calculation is more complex, requiring the least squares method and conversions between spherical and Cartesian coordinates.
- The accuracy of the second approach proved to be higher. By using more measurement points, the error levels ranged from 0.25% to 2.57%, with an average of 1.02%, as shown in Table 4.
- Due to the specifics of the mathematical algorithm used, the planar interpolation approach can compensate for errors in the measurement data, including those generated by uncertainties in the working environment. This is reflected in a more accurate calculation, as shown above. Furthermore, a larger set of points can be used to smooth the results, thus lowering the overall error.
- The measurements were performed at different heights in order to observe the variations of the error levels at different distances between the camera and the measurement plane. The distance to the measurement points was determined at three different camera heights, chosen randomly.
- For the first approach, the average error level for the first height was 5.7%; for the second height, it was 8.54%, and for the third height, it was 8.2%.
- For the second approach, the average error level for the first height was 0.81%; for the second height, it was 1.3%, and for the third height, it was 0.96%.
- The measuring heights were inserted into the tables in ascending order. It can be observed in both cases that the lowest height yielded the lowest error levels.
- For the first approach, the average error level for the 20 pixels perimeter was 5.44%; for the 30 pixels perimeter, it was 6.59%, for the 40 pixels perimeter, it was 7.46%, and for the 50 pixels perimeter, it was 10.44%.
- For the second approach, the average error level for the 20 pixels perimeter was 1.88%; for the 30 pixels perimeter, it was 0.85%, for the 40 pixels perimeter, it was 0.97%, and for the 50 pixels perimeter, it was 0.39%.
- It can be observed that, for the planar interpolation approach, the error levels decrease as the points are measured on a wider perimeter. For the law of cosines approach, the error levels increase with the measurement perimeter, as the errors accumulate due to the nature of the algorithm.
- For both calibration methods, one set of points is enough to determine the position and orientation of the camera relative to the measurement plane at a given moment. However, as the algorithm is implemented in Python, the results can be updated in real time: whenever necessary, that is, when the measurement plane changes, the calibration subroutine (the Python program) can be called to perform another calibration and determine the position and orientation of the camera relative to the new measurement plane.
- The errors shown in the analysis (Table 3 and Table 4) are the errors that result when comparing the real camera inclination angle to the calculated angle using the calibration algorithm. It should be noted that these are not object measurement errors. The main purpose of camera calibration is to reduce measurement errors due to image deformations induced by perspective. While it is true that a more precise calibration yields more precise results when measuring objects, any level of calibration would ultimately reduce measurement errors by some amount. Thus, it must be taken into account that, while in some cases presented above the calibration errors may reach or exceed 10%, the measurement errors induced by these offsets would be much lower. The overall assembly accuracy is determined by the 2D camera used; the calibration process itself just improves the measuring process.
5. Conclusions
- By using two approaches for camera calibration, with different calculation methods, the results could be discussed through comparative analysis. This made it possible to highlight the advantages and disadvantages of each method. While the law of cosines approach is simpler and easier to implement, without further refinement its high error levels make it unsuitable for integration in robotic vision applications.
- The average error of around 1% for the plane interpolation approach is acceptable for most robotic applications. At a wide inclination angle of 70°, for example, a 1% error corresponds to only about 0.7°.
- The planar interpolation approach has the advantage of an increased level of precision for wider spacing between the measurement points. Also, while there is a decrease in precision for greater camera heights, the error levels still remain around 1%.
- The described method does not require any calibration sheet or other dedicated materials, which makes it suitable for almost any workspace configuration. Using this procedure, the calibration can be performed in any position during operation, thus lowering the time required.
- Even though this calibration method sometimes has a lower accuracy, it is more suitable for some unstructured applications, such as those in which the position of the robot changes over time (e.g., robots mounted on mobile platforms, unstructured pick and place assembly operations), where other calibration methods, such as those based on a calibration pattern, are difficult to implement. This method can be executed in real time during robot operation, while other calibration methods are usually performed before robot operation.
- The Python code for the proposed calibration algorithm is easy to develop, and it can be integrated with most acquisition software.
- By integrating the Python code with the acquisition software, the input data can be fed to the system in real-time, which means that the calibration can be performed while the application is running. Thus, the data acquisition software can adapt immediately to any changes in the working environment, and periodical calibration cycles are no longer necessary.
- The calibration method described in this study holds significant potential to enhance results in industrial robotic applications, such as pick and place and assembly, and in other fields employing vision systems.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Table 1. Theta and Phi angles for the measurement points.

| Point | Theta [°] | Phi [°] |
|---|---|---|
| P1 | 45 | 7.32332 |
| P2 | 90 | 5.02232 |
| P3 | 135 | 7.32332 |
| P4 | 0 | 5.35714 |
| P5 | 0 | 0 |
| P6 | 180 | 5.35714 |
| P7 | 315 | 7.32332 |
| P8 | 270 | 5.02232 |
| P9 | 225 | 7.32332 |
Table 2. Acquired data for the nine measurement points at each measurement height (H1–H3) and perimeter size (20–50 px). Each point is reported as three values (listed here as X, Y, D), expressed with respect to the camera frame of reference.

| Data Set | Point | X | Y | D | Point | X | Y | D | Point | X | Y | D |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| H1 20 px | P9 | −49 | −50 | 450 | P8 | 0 | −40 | 452 | P7 | 49 | −50 | 447 |
| H1 20 px | P6 | −51 | −1 | 463 | P5 | 0 | −1 | 456 | P4 | 51 | −1 | 457 |
| H1 20 px | P3 | −51 | 51 | 467 | P2 | 0 | 41 | 465 | P1 | 51 | 50 | 463 |
| H1 30 px | P9 | −73 | −74 | 446 | P8 | 0 | −74 | 445 | P7 | 73 | −74 | 444 |
| H1 30 px | P6 | −75 | −1 | 456 | P5 | 0 | −1 | 457 | P4 | 74 | −1 | 450 |
| H1 30 px | P3 | −77 | 77 | 469 | P2 | 0 | 77 | 471 | P1 | 77 | 76 | 467 |
| H1 40 px | P9 | −96 | −97 | 438 | P8 | 0 | −98 | 441 | P7 | 96 | −96 | 435 |
| H1 40 px | P6 | −101 | −1 | 458 | P5 | 0 | −1 | 456 | P4 | 99 | −1 | 451 |
| H1 40 px | P3 | −102 | 102 | 468 | P2 | 0 | 104 | 475 | P1 | 103 | 102 | 469 |
| H1 50 px | P9 | −119 | −120 | 436 | P8 | 0 | −120 | 435 | P7 | 118 | −119 | 432 |
| H1 50 px | P6 | −126 | −1 | 458 | P5 | 0 | −1 | 456 | P4 | 125 | −1 | 453 |
| H1 50 px | P3 | −129 | 128 | 471 | P2 | 0 | 130 | 474 | P1 | 128 | 127 | 468 |
| H2 20 px | P9 | −63 | −64 | 575 | P8 | 0 | −60 | 574 | P7 | 63 | −64 | 572 |
| H2 20 px | P6 | −64 | 1 | 586 | P5 | 0 | 1 | 587 | P4 | 64 | −1 | 583 |
| H2 20 px | P3 | −66 | 65 | 598 | P2 | 0 | 61 | 596 | P1 | 66 | 65 | 594 |
| H2 30 px | P9 | −94 | −95 | 569 | P8 | 0 | −95 | 568 | P7 | 94 | −94 | 566 |
| H2 30 px | P6 | −96 | −1 | 585 | P5 | 0 | −1 | 587 | P4 | 96 | −1 | 578 |
| H2 30 px | P3 | −99 | 98 | 599 | P2 | 0 | 99 | 601 | P1 | 99 | 98 | 597 |
| H2 40 px | P9 | −123 | −124 | 561 | P8 | 0 | −125 | 564 | P7 | 122 | −123 | 557 |
| H2 40 px | P6 | −129 | −1 | 585 | P5 | 0 | −1 | 587 | P4 | 128 | −1 | 580 |
| H2 40 px | P3 | −131 | 131 | 600 | P2 | 0 | 133 | 607 | P1 | 132 | 131 | 600 |
| H2 50 px | P9 | −151 | −152 | 554 | P8 | 0 | −153 | 555 | P7 | 150 | −151 | 549 |
| H2 50 px | P6 | −160 | −1 | 584 | P5 | 0 | −1 | 586 | P4 | 159 | −1 | 577 |
| H2 50 px | P3 | −164 | 164 | 602 | P2 | 0 | 166 | 607 | P1 | 165 | 164 | 601 |
| H3 20 px | P9 | −66 | −67 | 597 | P8 | 0 | −62 | 596 | P7 | 66 | −66 | 593 |
| H3 20 px | P6 | −67 | −1 | 609 | P5 | 0 | −1 | 610 | P4 | 67 | −1 | 606 |
| H3 20 px | P3 | −68 | 68 | 621 | P2 | 0 | 61 | 620 | P1 | 68 | 67 | 616 |
| H3 30 px | P9 | −98 | −99 | 593 | P8 | 0 | −98 | 590 | P7 | 97 | −98 | 589 |
| H3 30 px | P6 | −100 | −1 | 607 | P5 | 0 | −1 | 609 | P4 | 100 | −1 | 602 |
| H3 30 px | P3 | −102 | 102 | 621 | P2 | 0 | 102 | 623 | P1 | 102 | 101 | 619 |
| H3 40 px | P9 | −128 | −129 | 584 | P8 | 0 | −130 | 586 | P7 | 127 | −128 | 579 |
| H3 40 px | P6 | −134 | −1 | 607 | P5 | 0 | −1 | 609 | P4 | 132 | −1 | 601 |
| H3 40 px | P3 | −137 | 136 | 623 | P2 | 0 | 138 | 630 | P1 | 137 | 136 | 622 |
| H3 50 px | P9 | −158 | −159 | 577 | P8 | 0 | −160 | 579 | P7 | 157 | −157 | 572 |
| H3 50 px | P6 | −167 | −1 | 607 | P5 | 0 | −1 | 609 | P4 | 165 | −1 | 600 |
| H3 50 px | P3 | −170 | 170 | 624 | P2 | 0 | 172 | 628 | P1 | 171 | 170 | 623 |
Table 3. Calculated camera inclination angles and error levels for the law of cosines approach.

| Height | Px | Angle [°] | Error |
|---|---|---|---|
| 1 | 20 | 81.75414348 | 2.19% |
| 1 | 30 | 76.05467223 | 4.93% |
| 1 | 40 | 75.04858306 | 6.19% |
| 1 | 50 | 72.38921295 | 9.51% |
| 2 | 20 | 74.61077611 | 6.74% |
| 2 | 30 | 73.87802451 | 7.65% |
| 2 | 40 | 73.32941035 | 8.34% |
| 2 | 50 | 70.86462849 | 11.42% |
| 3 | 20 | 74.0954911 | 7.38% |
| 3 | 30 | 74.25758952 | 7.18% |
| 3 | 40 | 73.72102328 | 7.85% |
| 3 | 50 | 71.68477916 | 10.39% |
Table 4. Calculated camera inclination angles and error levels for the planar interpolation approach.

| Height | Px | Angle [°] | Error |
|---|---|---|---|
| 1 | 20 | 79.3466 | 0.82% |
| 1 | 30 | 79.2076 | 0.99% |
| 1 | 40 | 79.0834 | 1.15% |
| 1 | 50 | 80.2278 | 0.28% |
| 2 | 20 | 78.2116 | 2.24% |
| 2 | 30 | 79.0063 | 1.24% |
| 2 | 40 | 79.1527 | 1.06% |
| 2 | 50 | 79.4779 | 0.65% |
| 3 | 20 | 77.9434 | 2.57% |
| 3 | 30 | 79.7438 | 0.32% |
| 3 | 40 | 79.4375 | 0.70% |
| 3 | 50 | 80.1961 | 0.25% |