
Visual Calibration for Multiview Laser Doppler Speed Sensing

Yunpu Hu, Leo Miyashita, Yoshihiro Watanabe and Masatoshi Ishikawa
1 Department of Creative Informatics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
2 Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, 4259-G2-31, Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
* Author to whom correspondence should be addressed.
Sensors 2019, 19(3), 582; https://doi.org/10.3390/s19030582
Submission received: 24 December 2018 / Revised: 24 January 2019 / Accepted: 25 January 2019 / Published: 30 January 2019
(This article belongs to the Special Issue Laser Sensors for Displacement, Distance and Position)

Abstract

We present a novel calibration method for a multiview laser Doppler speed sensing (MLDSS) system. In contrast with the traditional method, in which only the laser geometry is independently calibrated, the proposed method simultaneously optimizes all the laser parameters and directly associates them with the motion-sensing model. By jointly enforcing consistency among the laser Doppler velocimetry, the laser geometry, and a visual marker tracking system, the proposed calibration method further improves the accuracy of MLDSS. We analyze the factors influencing the precision and quantitatively evaluate the effectiveness of the proposed method on several datasets.

1. Introduction

Multiview laser Doppler speed sensing (MLDSS) [1] uses several one-dimensional speed measurements to recover the six-degree-of-freedom (6-DOF) motion of an arbitrary rigid body. Such a system has unique advantages, such as the capability to measure untextured and unstructured targets, ultrahigh speed, and ultralow computational cost. An inertial measurement unit [2,3] may have similar characteristics, but would be intrusive since it must be attached to the target. On the other hand, conventional contactless 6-DOF motion-sensing techniques, including computer-vision approaches, usually rely on the target structure [4,5] and texture [6,7], making it difficult to directly reapply them when the scenario changes. In this sense, MLDSS is potentially a more general solution for most motion-sensing tasks, e.g., industrial inspection, user interface [8], and autonomous driving.
It is important to mention that the precision of MLDSS strongly relies on the sensing accuracy of the laser Doppler velocimeter (LDV), the system-layout parameters, and the calibration precision. Factors related to LDV sensing, such as speckle noise [9,10] and laser-source drift [11,12], have been well discussed. Hu et al. stated the essential system-layout conditions and proposed a simple method to calibrate the system parameters [1]. In their approach, each laser was separately calibrated by linear fitting of camera-collected 3D laser-spot co-ordinates.
A similar calibration principle is commonly found in the literature for calibrating galvanometric laser systems with various types of sensors [13,14,15]. This approach is sound and achieves relatively good accuracy, but the LDV measurements and the system layout are not involved in the process; hence, we refer to it as "geometric-only calibration". Although the measurement precision is rather sensitive to the accuracy of the calibration parameters, geometric-only calibration minimizes only the error in image-based measurements and ignores its correlation with the motion-reconstruction process.
The inspiration for our approach derives from calibration techniques in optical systems and computer vision, in which simultaneously calibrating all elements is beneficial in terms of both simplicity and accuracy [16,17]. It is also important to incorporate a system measurement model into the geometrical calibration, as implied in References [18,19]. Based on this idea, we present a novel calibration technique for MLDSS: a maximum-likelihood estimate (MLE) of the system parameters, given the LDV measurements and the motion ground truth provided by an additional 3D tracking vision system. Instead of directly calculating the laser geometry, this method seeks the group of laser parameters that minimizes the motion-estimation error. We validated the proposed method on several different datasets, and a clear improvement in accuracy was confirmed.

2. MLDSS Parameters

In this section, we briefly describe the basic kinematic formulation in MLDSS [1] and parameterize it for the calibration.

Basic Equations

We work in a right-handed co-ordinate system with a fixed origin $O$. A rigid body has a 6-DOF speed denoted by $\mathbf{X} = [\boldsymbol{\omega}^T, \mathbf{v}^T]^T$, with angular components $\boldsymbol{\omega} = [\omega_x, \omega_y, \omega_z]^T$ and linear components $\mathbf{v} = [v_x, v_y, v_z]^T$.
MLDSS utilizes a simple kinematic model to reconstruct the speed of a target rigid body. When a laser-beam ray hits the target, the one-dimensional velocity along this ray is measured via the Doppler effect. Let the $i$th laser be denoted by $L_i: \langle \mathbf{o}_i, \mathbf{l}_i \rangle$, where $\mathbf{o}_i$ is a point on $L_i$ and $\mathbf{l}_i$ is its unit direction vector. The velocity measurement can then be formulated as
$$v_i = (\mathbf{o}_i \times \mathbf{l}_i) \cdot \boldsymbol{\omega} + \mathbf{l}_i \cdot \mathbf{v} \tag{1}$$
By combining multiple measurements of the form of Equation (1), the 6-DOF speed can be linearly reconstructed from the one-dimensional velocity measurements $v_i$. The minimum MLDSS implementation with six lasers, as shown in Figure 1, is written as
$$\begin{bmatrix} (\mathbf{o}_1 \times \mathbf{l}_1)^T & \mathbf{l}_1^T \\ (\mathbf{o}_2 \times \mathbf{l}_2)^T & \mathbf{l}_2^T \\ \vdots & \vdots \\ (\mathbf{o}_6 \times \mathbf{l}_6)^T & \mathbf{l}_6^T \end{bmatrix} \begin{bmatrix} \boldsymbol{\omega} \\ \mathbf{v} \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_6 \end{bmatrix} \tag{2}$$
We use $A$ to denote the leftmost matrix and $\mathbf{b}$ the rightmost vector in Equation (2). Matrix $A$ fully describes the system, but the dimension of the parameter space is smaller than the number of matrix elements. Consider the $i$th row of $A$, $(A)_i$. It represents the measurement process of laser $L_i$, determined by $\mathbf{o}_i$ and $\mathbf{l}_i$. Since $\mathbf{o}_i$ is a point on a straight line and $\mathbf{l}_i$ is a unit vector, each has only two DOFs, so $(A)_i$ has four DOFs. Under the optical system layout and the co-ordinate system illustrated in Figure 1, we let $\mathbf{o}_i = [o_{ix}, o_{iy}, 0]^T$ and $\mathbf{l}_i = [\cos\beta_i \cos\alpha_i, \sin\beta_i \cos\alpha_i, \sin\alpha_i]^T$, without loss of generality. The parameters of the system can then be denoted by
$$P = \{\, o_{ix}, o_{iy}, \alpha_i, \beta_i \mid i = 1, 2, \ldots, 6 \,\} \tag{3}$$
where $P$ consists of 24 elements.
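As a concrete illustration of Equations (2) and (3), the assembly of matrix $A$ from the parameter set $P$ and the linear speed recovery can be sketched in Python. The function names and NumPy-based structure are our own, not taken from the authors' implementation.

```python
import numpy as np

def build_A(params):
    """Assemble the 6x6 matrix A of Equation (2).

    params: six tuples (o_x, o_y, alpha, beta), one per laser,
    matching the parameter set P of Equation (3).
    """
    rows = []
    for o_x, o_y, alpha, beta in params:
        o = np.array([o_x, o_y, 0.0])                     # point on the laser, z = 0 plane
        l = np.array([np.cos(beta) * np.cos(alpha),       # unit direction vector
                      np.sin(beta) * np.cos(alpha),
                      np.sin(alpha)])
        rows.append(np.concatenate([np.cross(o, l), l]))  # row [(o x l)^T, l^T]
    return np.vstack(rows)

def reconstruct_speed(A, b):
    """Recover X = [omega^T, v^T]^T from six 1-D Doppler velocities b."""
    return np.linalg.solve(A, b)
```

For a non-degenerate layout (invertible $A$), a single 6x6 linear solve recovers the full 6-DOF speed, which is why MLDSS has such a low computational cost.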

3. MLDSS Calibration

3.1. Geometric-Only Calibration

Note that $P$ consists of only the geometrical parameters of the lasers, so each laser's spatial position, i.e., $\mathbf{o}_i$ and $\mathbf{l}_i$, can be calculated directly in the calibration.
We first briefly describe this geometric-only calibration method [1,14,15]. As illustrated in Figure 2, a planar board with a printed pattern, such as a chessboard, is used as the calibration object. When the laser hits the object, both the pattern and the laser spot are captured by a camera. Without loss of generality, we assume the camera is already calibrated. The position of the pattern can be calculated from the image [20], and the 3D co-ordinates of the laser spot can be computed by casting a ray from the camera to the pattern plane. This procedure is repeated, and the laser parameters are acquired by linear fitting of the multiple 3D point co-ordinates. Note that a similar principle can be applied with a different sensor, such as a depth camera [13].
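The per-laser line fit at the core of this procedure is commonly implemented as a principal-axis fit via SVD; the sketch below is one such implementation (the authors do not specify their exact fitting routine).

```python
import numpy as np

def fit_laser_line(points):
    """Fit a 3-D line o + s*l to triangulated laser-spot coordinates.

    points: (N, 3) array of 3-D laser-spot positions.
    Returns (o, l): the centroid (a point on the line) and the unit
    direction, taken as the first principal axis of the point cloud.
    """
    points = np.asarray(points, dtype=float)
    o = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - o)  # right singular vectors of centered cloud
    return o, vt[0]                       # vt[0]: direction of maximum variance
```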
This method clearly suffers from geometrical errors arising from sources such as image-processing error and lens distortion. Here, we go one step further and examine the motion-reconstruction error. Consider Equation (2): we can split matrix $A$ into a ground-truth matrix $A_g$ and a distortion matrix $A_\delta$. The motion-reconstruction error related to the calibration error is then approximately $\delta \mathbf{X} \approx A_g^{-1} A_\delta \mathbf{X}$. Compared with the error introduced by the LDV measurement, $\delta \mathbf{X} = A^{-1} \delta \mathbf{b}$, the calibration error introduces an unbounded error that increases linearly with the target speed $\mathbf{X}$. Hence, when sensing large speeds, calibration accuracy can be far more important than LDV measurement accuracy. However, geometric-only methods minimize only the geometric error (e.g., reprojection error), so the motion-reconstruction error remains vulnerable to large target speeds and to the systematic errors of the calibration system.
The intuition here, as we claimed in Section 1, is that all laser parameters should be jointly optimized, and that the motion-reconstruction error, rather than the laser-geometry error, should be minimized, so that bounded motion-reconstruction error can be expected.

3.2. Statistical Calibration by Minimizing Motion-Reconstruction Error

Hence, in this paper, the calibration of MLDSS is formulated as finding the set of parameters $P$ that generates the most accurate 6-DOF speed estimate $\tilde{\mathbf{X}}$ under the current system layout, given the LDV measurements $\mathbf{b}$.
To simplify the discussion, we divide motion sensing into discrete frames. First, assume that we know the ground-truth pose of the target object at each frame $k$, denoted by $\boldsymbol{\xi}_k = [\mathbf{r}_k^T, \mathbf{t}_k^T]^T$ in the $\mathfrak{se}(3)$ Lie algebra, where $\mathbf{r}_k = [r_{kx}, r_{ky}, r_{kz}]^T$ is the orientation and $\mathbf{t}_k = [t_{kx}, t_{ky}, t_{kz}]^T$ is the location. Simultaneously, the MLDSS system measures the velocities $\mathbf{b}_k = [v_{k1}, v_{k2}, \ldots, v_{k6}]^T$, one from each laser.
Note that long-term speed integration inevitably introduces a growing, time-dependent error, an overly short time interval may produce undesired peaks in the estimated speed, and interframe subtraction helps to cancel the systematic errors of the 3D tracking system. We therefore formulate the problem with interframe motion increments. The pose change between frames $k$ and $k-1$ is represented by $\delta \boldsymbol{\xi}_k = [\delta \mathbf{r}_k^T, \delta \mathbf{t}_k^T]^T$, where
$$\delta \mathbf{r}_k = \mathbf{r}_k - \mathbf{r}_{k-1}, \qquad \delta \mathbf{t}_k = \mathbf{t}_k - \exp(\delta \mathbf{r}_k)\, \mathbf{t}_{k-1} \tag{4}$$
$\exp(\cdot)$ denotes the conversion from an $\mathfrak{so}(3)$ rotation vector to the corresponding $\mathrm{SO}(3)$ rotation matrix, known as the Rodrigues transform [21].
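A minimal sketch of the inter-frame pose increment of Equation (4), with $\exp(\cdot)$ implemented directly from the Rodrigues formula; the function names are ours.

```python
import numpy as np

def rodrigues(r):
    """so(3) rotation vector -> SO(3) rotation matrix [21]."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew (cross-product) matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def pose_increment(r_k, t_k, r_prev, t_prev):
    """Inter-frame pose change (delta_r, delta_t) of Equation (4)."""
    dr = r_k - r_prev
    dt = t_k - rodrigues(dr) @ t_prev
    return dr, dt
```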
Knowing that matrix $A$ is uniquely determined by the parameter set $P$, the 6-DOF speed of the target measured by the MLDSS system at frame $k$ can be calculated as $\tilde{\mathbf{X}}_k(P) = [A(P)]^{-1} \mathbf{b}_k$. Let the scalar $\delta t_k = t_k - t_{k-1}$ denote the time interval between frames (here $t_k$ is the timestamp of frame $k$). The interframe pose-change estimate of the MLDSS system is then written as
$$\delta \tilde{\boldsymbol{\xi}}_k(P) = \tilde{\mathbf{X}}_k(P)\, \delta t_k \tag{5}$$
We use $\mathbf{e}_k(P) = \delta \boldsymbol{\xi}_k - \delta \tilde{\boldsymbol{\xi}}_k(P)$ to denote the error of one frame. Under the assumptions that $\delta \tilde{\boldsymbol{\xi}}_k(P)$ is locally linear and that the LDV measurement error follows a Gaussian distribution, the MLE of the MLDSS system parameters is the solution of the following nonlinear least-squares problem:
$$P_{opt} = \arg\min_P \sum_k \mathbf{e}_k^T(P)\, \mathbf{e}_k(P) \tag{6}$$
We solve Equation (6) using the Levenberg–Marquardt algorithm [22]. It requires an initial guess of P, which can be provided by the geometric-only method as introduced in Section 3.1.
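The optimization of Equation (6) can be handed to an off-the-shelf Levenberg–Marquardt solver. The sketch below uses SciPy's `least_squares`; the residual layout, the frame tuple format, and the `build_A` helper are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def build_A(p):
    """A(P) of Equation (2) from the flat 24-parameter vector P."""
    rows = []
    for o_x, o_y, alpha, beta in p.reshape(6, 4):
        o = np.array([o_x, o_y, 0.0])
        l = np.array([np.cos(beta) * np.cos(alpha),
                      np.sin(beta) * np.cos(alpha),
                      np.sin(alpha)])
        rows.append(np.concatenate([np.cross(o, l), l]))
    return np.vstack(rows)

def residuals(p, frames):
    """Stacked per-frame errors e_k(P) = delta_xi_k - X_k(P) * dt_k.

    frames: list of (delta_xi, b, dt), with delta_xi the ground-truth
    pose increment, b the six LDV velocities, and dt the frame interval.
    """
    A_inv = np.linalg.inv(build_A(p))
    return np.concatenate([dxi - (A_inv @ b) * dt for dxi, b, dt in frames])

# p0: initial guess from the geometric-only calibration of Section 3.1
# result = least_squares(residuals, p0, args=(frames,), method='lm')
```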

3.3. System Setup and Nonideal Factors

It has to be mentioned that the description in Section 3.2 omitted some details in order to clearly state the proposed method. In this subsection, we describe the practical system for calibration and reformulate the complete calibration problem.
Figure 3 illustrates the system for calibration. It consists of the MLDSS system to be calibrated [1], a marker-based 3D tracking system composed of three IR cameras (Flex 3, NaturalPoint, Inc., Corvallis, OR, USA), and a monochrome camera (MQ003MG-CM, XIMEA). The 3D tracking system uses active infrared illumination and a stereo method to track the 3D locations of retroreflective markers. Four or more asymmetrically placed markers on a rigid body can be tracked, with the 6-DOF pose denoted by the Lie-algebra element $\boldsymbol{\xi}$. This pose is taken as the ground truth in our discussion, since most of the systematic error is cancelled by interframe subtraction.
The four cameras were calibrated beforehand [23,24]. The IR cameras' trigger signal was inverted each time an LDV measurement was sampled, so the frame rate of the MLDSS was roughly twice that of the cameras. Throughout this paper, the MLDSS system ran at about 200 Hz and the tracking system at 100 Hz. All devices were controlled by a desktop workstation (Dell T7910 with an Intel E5-2687W v4 @ 3 GHz). Note that such a high-spec computer is not necessary for this calibration: both the motion reconstruction and the optimization in this paper are computationally light and can run on common desktop computers.
We address the nonideal factors in this practical calibration system as follows.
Temporal misalignment. As described above, the 3D tracking system runs more slowly than the MLDSS system, so the two are temporally misaligned. Using $f$ to denote the frame index of MLDSS and $t_f$ the corresponding timestamp, we align the MLDSS measurements to the 3D tracking system. Equation (5) is thus reformulated by linearized integration and interpolation:
$$\delta \tilde{\boldsymbol{\xi}}_k(P) = \sum_f \tilde{\mathbf{X}}_f(P)\, \delta t_f \tag{7}$$
where
$$\delta t_f = \min(t_f - t_{k-1},\ t_f - t_{f-1},\ t_k - t_f), \qquad t_{k-1} \le t_f \le t_k \tag{8}$$
Note that although the two systems are synchronized by a hardware connection, unintended temporal misalignment may still exist due to the sampling time of the LDV and the exposure time of the cameras. This misalignment causes an error in $\mathbf{e}_k(P)$ that is independent of the parameters $P$ and ultimately degrades calibration precision. To address it, we model the temporal misalignment as a fixed offset parameter $t_o$. Equation (8) is thus rewritten as
$$\delta t_f = \min(t_f - t_{k-1} + t_o,\ t_f - t_{f-1},\ t_k - t_f - t_o), \qquad t_{k-1} - t_o \le t_f \le t_k - t_o \tag{9}$$
Mechanical velocity offset. A galvanometer scanner is used to control the laser direction. Although we assumed the scanner had fully stopped when each LDV measurement was sampled, the scanner may oscillate slightly and introduce an error in the measured speed. Because the control sequence of the galvanometer is fixed and the optical path of each laser is different, this offset can be regarded as systematic for each laser. We thus refine the speed reconstruction as
$$\tilde{\mathbf{X}}_f(P) = [A(P)]^{-1} (\mathbf{b}_f + \mathbf{b}_o) \tag{10}$$
where b o is the vector composed of the velocity offset for each laser due to the corresponding mechanical oscillation.
Weighting error terms. In solving the multiobjective optimization problem of Equation (6), it is important to balance the influence of the individual error terms and data samples so that no subset of the data dominates. We therefore introduce a diagonal weight matrix $W$. The error term $E_k$ for each sample $k$ is thus
$$E_k(P) = \mathbf{e}_k^T(P)\, W\, \mathbf{e}_k(P) \tag{11}$$
Considering the difference in units and the distance from the calibration object to the origin of the system, $[1, 1, 1, 0.005, 0.005, 0.005]$ was chosen ad hoc as the diagonal of $W$ in our experiment; note that these weights should be adjusted when the co-ordinate system changes.
Outlier measurements. Speckle noise [10] is one of the main noise sources of the LDV, causing undesired short-term peaks in velocity measurements. Such noise can largely be suppressed in the low-frequency domain with a tracking filter [1,9]. However, because the signal intensity can be low due to loss of laser focus, speckle noise occasionally remains in the measurements. To reduce its influence on calibration accuracy, a modified Huber kernel [25] is introduced into the problem:
$$E_k(P) = \begin{cases} \mathbf{e}_k^T(P)\, W\, \mathbf{e}_k(P) & \text{if } \mathbf{e}_k^T(P)\, W\, \mathbf{e}_k(P) < \delta^2 \\ 2\delta \sqrt{\mathbf{e}_k^T(P)\, W\, \mathbf{e}_k(P)} - \delta^2 & \text{otherwise} \end{cases} \tag{12}$$
where outlier errors grow linearly instead of quadratically.
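The kernel of Equation (12) reduces to a short function of the weighted squared error; a sketch, with `delta` the inlier threshold:

```python
import numpy as np

def modified_huber(s, delta):
    """Modified Huber kernel applied to a squared error s = e^T W e:
    quadratic below the threshold delta^2, linear in the error norm above it."""
    s = np.asarray(s, dtype=float)
    return np.where(s < delta**2, s, 2.0 * delta * np.sqrt(s) - delta**2)
```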
Complete maximum-likelihood estimation. With the above considerations, we naturally extend Equation (6) by estimating the complete set of parameters:
$$P_{opt} = \arg\min_{P,\, t_o,\, \mathbf{b}_o} \sum_k E_k(P, t_o, \mathbf{b}_o) \tag{13}$$
where $E_k(P, t_o, \mathbf{b}_o)$ is defined by injecting Equations (7), (9), and (10) into Equation (12). Equation (13) is also solved with the Levenberg–Marquardt algorithm [22]. The initial guess of $P$ is provided by geometric-only calibration; $t_o$ and $\mathbf{b}_o$ are simply initialized to zero.

3.4. Summary

In this section, the calibration of MLDSS with a practical system was described and formulated. The proposed method states the calibration problem as minimization of the motion error, which is essentially different from conventional geometrical methods [1,14,15]. The procedure of the proposed calibration is summarized as follows:
  • Make a calibration object with four or more asymmetrically placed markers;
  • register the calibration object in the 3D tracking system as a trackable rigid body;
  • simultaneously capture target motion and speed with the MLDSS and the 3D tracking system, and build a dataset following the instructions in Section 4;
  • estimate initial parameters using the geometric-only method introduced in Section 3.1 (optional); and
  • refine all parameters by solving Equation (13).

4. Data Collection

It is straightforward to solve Equation (13) with the collected motion data. However, inappropriate data selection provides insufficient information and ultimately results in an ill-conditioned optimization problem. In this section, we study data collection for our method.
Consider the derivatives of Equation (2). The Jacobian $J_P = \partial \mathbf{X} / \partial P$ describes the gradient direction in the parameter space for one data sample. Let the parameters of laser $L_i$ be denoted by $P_i$, and the $i$th column of $A^{-1}$ by $[A^{-1}]_i$. Knowing that $A$ is a function of $P$, we write the partial Jacobian $J_i$ as
$$J_i = \frac{\partial \mathbf{X}}{\partial P_i} = -[A^{-1}]_i\, \mathbf{X}^T G_i \tag{14}$$
where $G_i$ is given by Equation (15).
$$G_i = \frac{\partial (A)_i^T}{\partial P_i} =
\begin{bmatrix}
0 & \sin\alpha_i & o_{iy}\cos\alpha_i & 0 \\
-\sin\alpha_i & 0 & -o_{ix}\cos\alpha_i & 0 \\
\sin\beta_i\cos\alpha_i & -\cos\beta_i\cos\alpha_i & (o_{iy}\cos\beta_i - o_{ix}\sin\beta_i)\sin\alpha_i & (o_{ix}\cos\beta_i + o_{iy}\sin\beta_i)\cos\alpha_i \\
0 & 0 & -\cos\beta_i\sin\alpha_i & -\sin\beta_i\cos\alpha_i \\
0 & 0 & -\sin\beta_i\sin\alpha_i & \cos\beta_i\cos\alpha_i \\
0 & 0 & \cos\alpha_i & 0
\end{bmatrix} \tag{15}$$
where the columns correspond to $\partial/\partial o_{ix}$, $\partial/\partial o_{iy}$, $\partial/\partial \alpha_i$, and $\partial/\partial \beta_i$, respectively.
$J_P$ is the stack of $J_i$ for $i = 1, 2, \ldots, 6$. It can be observed that some submatrices of $J_P$ contain only zero elements, meaning that certain kinds of motion error provide no additional constraint for some parameters. In particular, from Equation (14), error in the translation components provides no constraint for the position parameters $o_{ix}$ and $o_{iy}$, and error in $r_x$, $r_y$ provides no constraint for the direction angle $\beta_i$.
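This zero pattern can be verified numerically. The sketch below finite-differences one measurement row of Equation (2) and confirms that the entries multiplying $\mathbf{v}$ have zero derivative with respect to the position parameters; the helper names are ours.

```python
import numpy as np

def measurement_row(o_x, o_y, alpha, beta):
    """Row (A)_i of Equation (2) for one laser: [(o x l)^T, l^T]."""
    o = np.array([o_x, o_y, 0.0])
    l = np.array([np.cos(beta) * np.cos(alpha),
                  np.sin(beta) * np.cos(alpha),
                  np.sin(alpha)])
    return np.concatenate([np.cross(o, l), l])

def G_numeric(p, eps=1e-7):
    """Finite-difference Jacobian d(A)_i^T / dP_i (6x4), cf. Equation (15)."""
    cols = []
    for j in range(4):
        dp = np.zeros(4)
        dp[j] = eps
        cols.append((measurement_row(*(np.array(p) + dp))
                     - measurement_row(*p)) / eps)
    return np.stack(cols, axis=1)

G = G_numeric([30.0, -20.0, 0.4, 1.1])
# Rows 4-6 (the entries of (A)_i that multiply v) do not depend on the
# position parameters o_x, o_y, so their first two columns vanish.
assert np.allclose(G[3:, :2], 0.0)
```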
More intuition is provided by Table 1, where we simulated the norms of the submatrices of $J_P$ using real calibration parameters. Subtle, Rot, and Trans denote the motion patterns of a subtle speed $\mathbf{X} = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]^T$, a rotational speed with $\omega_x = \pi\ \mathrm{rad/s}$, and a translational speed with $v_x = 100\ \mathrm{mm/s}$, respectively. $a$ and $o$ denote the sets of direction parameters $\{\alpha_i, \beta_i\}$ and position parameters $\{o_{ix}, o_{iy}\}$ over all lasers.
Table 1 demonstrates how the motion pattern influences the gradient direction. A large translational motion produces a larger gradient in the direction parameters $a$ and a smaller gradient in the position parameters $o$. Considering this jointly with Equation (14), translational motion error in one direction cannot fully condition the parameters, yet the absolute value of its gradient can be much larger than that of rotational motion. This may ultimately result in overfitting to part of the dataset, while the rest of the calibration data contributes little to the optimization.
Two implications can be derived to mitigate this situation. First, the contributions of different samples to the optimization problem should be balanced according to their gradients; this is done by reweighting the error terms as introduced in Section 3.3. Second, multiple motion patterns should be included and evenly distributed in the dataset to avoid degenerate configurations.
Based on the above discussion, we prepared the dataset for solving Equation (13) with the calibration object illustrated in Figure 4a. In the dataset, the motion of the calibration object was evenly distributed over the six patterns illustrated in Figure 4b. During data collection, the calibration object was manually moved roughly 1 m away from the measurement systems, as shown in Figure 3.

5. Evaluation

5.1. Cross-Validation

We validated the proposed method on a real dataset consisting of 4500 sample frames for calibration and 1500 frames for testing. Both the calibration set and the test set consist of six evenly distributed back-and-forth motion patterns, as shown in Figure 4b. This amount of data far exceeds what is necessary for the calibration, but it helps to reduce data noise. Note that the proposed system runs at high speed: the marker tracking system at 100 Hz and the MLDSS at 200 Hz. Hence, it takes no more than 10 min to collect all the data and finish the optimization.
We compared the proposed method with the geometric-only calibration method [1], in which the camera-calibration reprojection error was below 0.081 pixel and the 3D point-to-line root mean square (RMS) distance in the linear fitting was 0.734 mm (100 points for each laser). Note that this error is already smaller than that of the state-of-the-art geometric-only method [13], thanks to the use of a long-focus camera lens. Since in the MLDSS scenario we care most about motion reconstruction, while geometrical error is of minor importance, we compared the correctness of the reconstructed motion increments on the test set.
Figure 5 illustrates the six motion components in the reconstruction results of all samples in the test set. The figure qualitatively confirms that motion reconstruction was improved by the proposed method. While the geometric-only method achieved relatively good accuracy in the major motion components, such as translational motion, its capability to measure subtle and rotational motions degraded. In view of Equation (14), we attribute this to erroneous estimation of the position parameters $o_{ix}$ and $o_{iy}$, combined with the joint effect of the geometrical errors of multiple lasers. In contrast, the proposed method achieved better accuracy in almost all motion components, as shown in the magnified parts of the figure.
To quantify the results, the RMS error between the motion-reconstruction result and the ground truth is shown in Table 2. We divided the test data into six parts by major motion pattern, as demonstrated in Figure 4b. Table 2 shows that the proposed method achieved smaller motion-reconstruction error for almost all motion patterns. The RMS errors of both the rotational and translational components across the whole dataset were reduced by more than 50%, which we consider a significant improvement in accuracy.

5.2. Sensing a Daily Object

To show that the proposed method is also valid for speed estimation of daily objects other than the calibration board, we briefly tested the new parameters by measuring a rotating globe, as shown in Figure 6a. The ground-truth motion was again measured by the 3D marker tracking system. Figure 6b shows the result: the proposed method achieved a more accurate result than the geometric-only method, with a rotational RMS error of 0.0245 rad for the proposed calibration parameters versus 0.0388 rad for the conventional method.

6. Conclusions

In this paper, we described a new calibration method for MLDSS. In contrast with the conventional method, in which only the laser geometry is independently calibrated from images, the proposed method simultaneously optimizes all laser parameters and directly associates them with the motion-sensing model. We discussed the system model and the factors related to calibration accuracy in detail, and proposed a maximum-likelihood-estimation-based method for solving the calibration problem. Finally, higher accuracy than the state-of-the-art method was confirmed by qualitative and quantitative cross-validation on a 1500-sample test set.

Author Contributions

Conceptualization, Y.H. and L.M.; data curation, Y.H.; formal analysis, Y.H.; funding acquisition, Y.W.; investigation, Y.H.; methodology, Y.H.; project administration, L.M. and Y.W.; software, Y.H.; supervision, M.I.; validation, Y.H.; visualization, Y.H.; writing—original draft, Y.H.; writing—review and editing, Y.H., L.M., and Y.W.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, Y.; Miyashita, L.; Watanabe, Y.; Ishikawa, M. Robust 6-DOF motion sensing for an arbitrary rigid body by multi-view laser Doppler measurements. Opt. Express 2017, 25, 30371–30387.
  2. Won, S.P.; Golnaraghi, F.; Melek, W.W. A Fastening Tool Tracking System Using an IMU and a Position Sensor With Kalman Filters and a Fuzzy Expert System. IEEE Trans. Ind. Electron. 2009, 56, 1782–1792.
  3. Yi, C.; Ma, J.; Guo, H.; Han, J.; Gao, H.; Jiang, F.; Yang, C. Estimating Three-Dimensional Body Orientation Based on an Improved Complementary Filter for Human Motion Tracking. Sensors 2018, 18, 3765.
  4. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  5. Chetverikov, D.; Svirko, D.; Stepanov, D.; Krsek, P. The Trimmed Iterative Closest Point algorithm. In Proceedings of the 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; Volume 3, pp. 545–548.
  6. Garon, M.; Lalonde, J. Deep 6-DOF Tracking. IEEE Trans. Vis. Comput. Gr. 2017, 23, 2410–2418.
  7. Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625.
  8. Hu, Y.; Miyashita, L.; Watanabe, Y.; Ishikawa, M. GLATUI: Non-intrusive Augmentation of Motion-based Interactions Using a GLDV. In Proceedings of the Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18), Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018; pp. 1–6.
  9. Vass, J.; Šmíd, R.; Randall, R.; Sovka, P.; Cristalli, C.; Torcianti, B. Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics. Mech. Syst. Signal Process. 2008, 22, 647–671.
  10. Rothberg, S. Numerical simulation of speckle noise in laser vibrometry. Appl. Opt. 2006, 45, 4523–4533.
  11. Chang, Y.H.; Liu, C.S.; Cheng, C.C. Design and Characterisation of a Fast Steering Mirror Compensation System Based on Double Porro Prisms by a Screw-Ray Tracing Method. Sensors 2018, 18, 4046.
  12. Liu, C.-S.; Lin, K.-W. Numerical and experimental characterization of reducing geometrical fluctuations of laser beam based on rotating optical diffuser. Opt. Eng. 2014, 53, 122408.
  13. Sels, S.; Bogaerts, B.; Vanlanduit, S.; Penne, R. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera. Sensors 2018, 18, 1478.
  14. Tu, J.; Zhang, L. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision. Sensors 2018, 18, 197.
  15. Wissel, T.; Wagner, B.; Stüber, P.; Schweikard, A.; Ernst, F. Data-Driven Learning for Calibrating Galvanometric Laser Scanners. IEEE Sens. J. 2015, 15, 5709–5717.
  16. Quéau, Y.; Leporcq, F.; Alfalou, A. Design and simplified calibration of a Mueller imaging polarimeter for material classification. Opt. Lett. 2018, 43, 4941–4944.
  17. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Vision Algorithms: Theory and Practice; Triggs, B., Zisserman, A., Szeliski, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372.
  18. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A Minimal Solution for the Extrinsic Calibration of a Camera and a Laser-Rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107.
  19. Scaramuzza, D.; Harati, A.; Siegwart, R. Extrinsic self calibration of a camera and a 3D laser range finder from natural scenes. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’07), San Diego, CA, USA, 29 October–2 November 2007; pp. 4164–4169.
  20. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vis. 2008, 81, 155.
  21. Trucco, E.; Verri, A. Introductory Techniques for 3-D Computer Vision; Prentice Hall: New Jersey, NJ, USA, 1998; Volume 201.
  22. Marquardt, D. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441.
  23. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  24. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
  25. Huber, P.J. Robust Estimation of a Location Parameter. Ann. Math. Stat. 1964, 35, 73–101.
Figure 1. System to be calibrated and its parameters. Laser beams emitted from the laser Doppler velocimeter (LDV) are reflected by the galvanometer scanner and mirror array MR1~MR4, and finally hit the moving target object. By scanning the laser over the target surface, the six-degree-of-freedom (6-DOF) speed of the target can be reconstructed. The parameters consist of the laser geometry, denoted by the x and y co-ordinates on the z = 0 plane, and the laser-direction angles.
Figure 2. Geometric-only calibration procedures.
Figure 3. Experimental setup. Photograph of the system and the experimental condition. MR1~MR4 are the mirrors, and GS is the galvanometer scanner. IR1~IR3 are IR illumination sources and cameras. XI is a monochrome camera. Calibration object was moved manually 1 m away from the system.
Figure 4. (a) Calibration object, a white board with ten randomly but asymmetrically placed retroreflective markers. (b) Six motion patterns of the calibration object in its local co-ordinate system, of which the calibration data are composed.
Figure 5. Motion-reconstruction results of the test set. Specifically, this figure includes the rotational components (top three subfigures) and translational components (bottom three subfigures) of the interframe motion increments in the test set. It confirms that the proposed method clearly outperforms the geometric-only method.
Figure 6. (a) Used test globe. Retroreflective markers (red-circled) were attached on the globe to track its rotation. (b) Motion-reconstruction results (rotational part).
Table 1. Norms of the submatrices of J_P under different motions.

Motion Pattern | ∂ω/∂a  | ∂ω/∂o  | ∂v/∂a | ∂v/∂o
Subtle         | 0.0036 | 0.0012 | 0.280 | 0.2201
Rot            | 0.0088 | 0.0424 | 0.501 | 3.39
Trans          | 1.310  | 0      | 99.20 | 0
Table 2. Root mean square (RMS) errors in the test data. Rotational errors in rad; translational errors in mm.

Test Set | Rotational: Proposed | Geo    | Translational: Proposed | Geo
Rot_x    | 0.0139               | 0.0484 | 2.3142                  | 9.0864
Rot_y    | 0.0108               | 0.0321 | 2.0543                  | 4.9750
Rot_z    | 0.0239               | 0.0321 | 3.8263                  | 7.6578
Trans_x  | 0.0101               | 0.0089 | 3.1751                  | 3.4187
Trans_y  | 0.0077               | 0.0220 | 1.9044                  | 3.8176
Trans_z  | 0.0054               | 0.0075 | 1.6014                  | 2.7850
Total    | 0.0133               | 0.0290 | 2.5977                  | 5.7731
