Article

A Robust Method for Validating Orientation Sensors Using a Robot Arm as a High-Precision Reference

Antal Bejczy Center for Intelligent Robotics, Obuda University, 1034 Budapest, Hungary
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(24), 8179; https://doi.org/10.3390/s24248179
Submission received: 8 October 2024 / Revised: 14 December 2024 / Accepted: 17 December 2024 / Published: 21 December 2024

Abstract

This paper presents a robust and efficient method for validating the accuracy of orientation sensors commonly used in practical applications, leveraging measurements from a commercial robotic manipulator as a high-precision reference. The key concept lies in determining the rotational transformations between the robot’s base frame and the sensor’s reference, as well as between the TCP (Tool Center Point) frame and the sensor frame, without requiring precise alignment. Key advantages of the proposed method include its independence from the exact measurement of rotations between the reference instrumentation and the sensor, systematic testing capabilities, and the ability to produce repeatable excitation patterns under controlled conditions. This approach enables automated, high-precision, and comparative evaluation of various orientation sensing devices in a reproducible manner. Moreover, it facilitates efficient calibration and analysis of sensor errors, such as drift, noise, and response delays under various motion conditions. The method’s effectiveness is demonstrated through experimental validation of an Inertial Navigation System module and the SLAM-IMU fusion capabilities of the HTC VIVE VR headset, highlighting its versatility and reliability in addressing the challenges associated with orientation sensor validation.

1. Introduction

Orientation sensors, whether standalone or integrated into pose-tracking systems, are extensively used in drones, mobile robots, medical applications [1], and immersive visualization devices [2]. These sensors provide absolute measurements, differential rotations, or a combination of both. For instance, Inertial Navigation Systems integrate data from a gyroscope, accelerometer, and magnetometer, while 3D SLAM applications can leverage IMU data alongside odometry for precise localization and mapping.
Sensor systems with stable external references like motion capture cameras, VR tracking devices, and electromagnetic trackers provide reliable absolute orientation with moderate noise levels. However, the increasingly popular MEMS-based and the so-called inside-out vision-based systems are less robust against noise, bias, drift, and delay. For example, concerning MEMS sensors, Shi et al. pointed out that environmental factors, particularly ambient temperature, significantly affect the bias and drift of these sensors, which can lead to decreased orientation estimation accuracy [3]. Another analysis by Suvorkin et al. utilized the Allan deviation slope method to evaluate noise characteristics across different grades of sensors. This analysis provides insights into the stability and reliability of IMU measurements over time, which is essential for applications requiring high precision [4]. Understanding noise characteristics is crucial for interpreting IMU data accurately and ensuring its validity in practical applications. With the emergence of more advanced observer-based and dynamic filtering techniques for enhancing raw IMU data, such as those proposed in [5,6], the need for repeatable and automated validation methods has become increasingly important.
Inside-out tracking systems, such as those used in Microsoft’s HoloLens or HTC Vive, rely on onboard cameras to track the environment and determine orientation. While these systems offer the advantage of not requiring external infrastructure, they are highly sensitive to environmental conditions, such as lighting and occlusions. A study by Zhang et al. found that inside-out tracking systems could experience substantial errors in orientation estimation when subjected to rapid movements or changes in lighting conditions [7]. Additionally, the latency introduced by image processing can lead to noticeable delays in orientation updates, impacting user experience. Similarly, a paper by Niehorster et al. on the HTC Vive’s inside-out tracking system highlighted that while the precision of tracking measurements is high, the system can experience significant offsets when tracking is temporarily lost, affecting overall accuracy [8]. Other studies using the HTC Vive tracker report similar conclusions [9,10]. This illustrates the challenges faced by systems that rely on internal measurements compared with those that utilize stable external references.
These imperfections underline the importance of validation methodologies that allow for precise and automatic comparisons of various sensors under multiple conditions in a repeatable way. The validation of orientation sensors relies heavily on the establishment of ground truth data. Ground truth serves as a reference against which the performance of the sensors can be assessed. Various methods for obtaining ground truth have been documented in the literature, each with its unique advantages and applications.
One of the most reliable forms of ground truth for validating IMU orientation sensors is the use of motion capture systems. These systems provide high-precision data regarding the position and orientation of subjects in three-dimensional space. For instance, Morrow et al. conducted a validation study where they compared the kinematic data obtained from a commercially available IMU system against a standard optical motion capture system [11]. Inside-out tracking systems have been validated in similar setups [9,12]. This comparative approach is widely recognized for its effectiveness if a motion capture laboratory is available.
Another approach to obtaining reliable references is the use of mechanical fixtures. The use of fixtures in experimental setups allows for controlled conditions that can significantly enhance the reliability of sensor validation processes. Bliley et al. [13] described a validation method for wearable motion-sensing devices that involved a mechanical fixture. Their approach utilized a platform rotated by a servo motor, which allowed for precise control over the orientation of the sensors being tested. This method ensured that the sensors were subjected to known and repeatable movements, facilitating accurate comparisons between the sensor outputs and the expected values. Additionally, Eastman et al. [14] highlighted the importance of using fixtures in their work on 6DoF (six degrees of freedom) object pose ground truth measurements. They employed an aluminum pose fixture along with a laser tracker to establish a reliable reference for validating sensor outputs. This combination of a rigid fixture and precise measurement tools allowed for high accuracy in determining the orientation and position of objects. Similarly, Herickhoff et al. [15] utilized a lightweight plastic fixture to hold an IMU sensor. The adaptability of such fixtures makes them valuable tools in various applications, including medical imaging and motion analysis.
These published methods lack the ability to provide a programmatic way to generate test motions and then analyze the results in a tractable manner. Using a robot arm as a ground-truth reference offers important benefits: it can produce precise and repeatable movements, allowing for accurate comparisons between the sensor outputs and the known positions and orientations of the arm. This motivation is supported by several studies in the literature. Schmidt and Kraft noted that accurate ground truth trajectories can be obtained when a camera is mounted on an industrial robot arm, which allows for reliable evaluations of visual SLAM (Simultaneous Localization and Mapping) algorithms [16]. Several papers, such as [17,18,19], highlight the advantages of using robot arms to create motion patterns for testing IMUs. A notable method proposed by Hislop et al. employs an industrial robot to evaluate the performance of multiple IMU sensors simultaneously [20]. However, despite the use of various robotic manipulators, no systematic approach has been published to mathematically address the geometric relationships between the robot’s reference frame and the sensor’s reference frame.
The authors faced this unsolved issue during the experimental validation of a generic direction and orientation tracking method based on an IMU sensor, published in [21]. In our former work, we proposed a mathematical framework for the calibration and tracking of objects with one functional direction (continuous rotational symmetric case) and multiple functional directions (non-symmetric case). This previous work inspired the current study, which proposes a method for the validation of orientation sensors with respect to an external reference. The proposed method is agnostic to the external (reference) measurement system, but it is discussed considering a robotic arm as a robust integrated actuation and sensing device, and robotics terminology is used throughout the discussion.
In our setup, a commercial robot arm is employed for validation, providing a precise orientation reference and enabling automated testing capabilities (Figure 1). This approach not only ensures high accuracy but also facilitates systematic and repeatable validation processes.
The validation of orientation sensors using any test setup introduces the challenge of dealing with different reference and target frames. The sensor initializes its own unknown reference frame and measures the orientation of a frame fixed to the sensor relative to this reference. The analogous frames of an industrial manipulator are called base frame and TCP frame, respectively (see Figure 2). We can suppose that the industrial robot’s internal measurement system provides reliable position and accurate orientation information regarding the base to TCP transformation [22,23].
As its main contribution, this paper proposes a tractable method to approximate the rotation transformation between the robot’s base frame and the sensor frame, as well as between the TCP (Tool Center Point) frame and the sensor frame. The method involves two main steps: first, deriving an initial estimate using geometric principles, followed by a local optimization to minimize the impact of measurement noise and errors. The proposed method relies solely on easily obtainable logged data of the robot’s TCP orientations and the sensor’s outputs, enabling automated and repeatable testing under a wide range of operational conditions and motion dynamics.
To demonstrate the method’s effectiveness, experiments were conducted using a UR16e robot [24] with two distinct sensors: the widely used ICM20948 IMU sensor [25] and the HTC VIVE headset’s inside-out tracking system [8,26,27,28].
This paper is organized as follows: Section 2 defines the problem and introduces the relevant notations. Section 3 elaborates on the proposed calibration method. Section 4 presents experimental results obtained with various sensors and provides a comparative analysis. Section 5 outlines future research activities and potential practical applications. Finally, Section 6 summarizes the findings. Appendix A includes details on vector and quaternion operations, along with their respective notations, as used throughout the paper. Original measurement data associated with the content of this paper can be found at [29].

2. Problem Description

In the investigated measurement setup, the $q^{(base,TCP)}(t)$ orientation can be obtained from the pose of the manipulator, while the sensor provides its $q^{(ref,sensor)}(t)$ orientation with respect to its own reference frame; see Figure 2.
Note that the orientation sensor can be attached to the robot’s end-effector in any arbitrary relative orientation. There are no restrictions on this relative orientation, as it does not affect the calibration accuracy. However, it is essential that the relative orientation remains unchanged throughout the entire measurement process.
In an ideal case, there exist $q^{(base,ref)}$ and $q^{(TCP,sensor)}$ orientations such that
$$q^{(base,TCP)}(t) \cdot q^{(TCP,sensor)} = q^{(base,ref)} \cdot q^{(ref,sensor)}(t) \qquad (1)$$
for all $t \in [0, t_{meas}]$. However, in practical scenarios, there is an error resulting from the measurement noise and the drift of the sensor.
In the following, the $q^{(base,TCP)}(t)$ value (read from the robot controller) will be considered as ground truth. The computation of the $q^{(TCP,sensor)}$ and $q^{(base,ref)}$ orientations from some initial data will be referred to as calibration.
Considering the whole recorded dataset, the angle error of Equation (1) will be analyzed to obtain the measurement noise and the drift of the sensor.
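To make this evaluation concrete, the following minimal sketch (our illustration, not the authors' software; it assumes logged quaternion arrays in SciPy's scalar-last $(x, y, z, w)$ order and already-calibrated transforms) computes the per-sample angle error of Equation (1):

```python
# Minimal sketch (not the authors' software): per-sample angle error of Equation (1).
# Assumptions: quaternion arrays use SciPy's scalar-last (x, y, z, w) ordering;
# q_base_ref and q_tcp_sensor are single quaternions obtained from a prior calibration.
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_error_deg(q_base_tcp, q_ref_sensor, q_base_ref, q_tcp_sensor):
    """Angle between the two sides of Equation (1) for every logged sample, in degrees."""
    lhs = R.from_quat(q_base_tcp) * R.from_quat(q_tcp_sensor)   # robot-side chain
    rhs = R.from_quat(q_base_ref) * R.from_quat(q_ref_sensor)   # sensor-side chain
    return np.degrees((lhs.inv() * rhs).magnitude())
```

Once the calibration of Section 3 has been performed, these per-sample angles correspond to the $\epsilon(t)$ error analyzed in Section 4.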

3. Calibration Method

The calibration is performed using solely the first $0 \le t < t_{cal}$ values of the measured data. This approach enables the analysis of sensor drift in the subsequent measurements for $t \ge t_{cal}$.
The method requires at least three values from the $t < t_{cal}$ portion of the measured data, each rotated by an angle $60\,[\mathrm{deg}] \le \varphi \le 120\,[\mathrm{deg}]$ relative to the others. Denote the time of these measurements as $t_A$, $t_B$, and $t_C$.
The calibration process consists of two stages: first, obtaining a good initial guess using the measurements at $t_A$, $t_B$, and $t_C$; next, refining this initial estimate with all measurements for $t < t_{cal}$. The following subsections provide a detailed description of these steps.
To minimize potential inaccuracies of the calibration resulting from processing and communication delays during the parallel recording of the robot’s and the sensor’s orientations, the robot’s motion program should be designed carefully. Specifically, in the $t < t_{cal}$ interval, the end-effector should move at a low angular velocity and remain steady at $t_A$, $t_B$, and $t_C$ whenever possible.

3.1. Initial Guess

Consider the sensor poses at $t_A$, $t_B$ and $t_C$. The axis of rotation between two orientations gives a well-defined and easily obtainable common direction for both the sensor and the TCP.
For this, compute the direction of the axes as
$$[t_{AB}^{(TCP)}, \varphi_{AB}] = \mathrm{axis\_angle}\left(q^{(base,TCP)}(t_A)^{-1} \cdot q^{(base,TCP)}(t_B)\right);$$
if the computation results in a negative $\varphi_{AB}$ angle, flip the direction of $t_{AB}^{(TCP)}$. (The computation requires that $\varphi_{AB}$, $\varphi_{BC}$, and the angle between $t_{AB}$ and $t_{BC}$ be as close to a right angle as possible.) Furthermore, the vectors $t_{BC}^{(TCP)}$, $t_{AB}^{(sensor)}$ and $t_{BC}^{(sensor)}$ can be computed similarly.
Then, these directions can be expressed in the $base$ and $ref$ frames as
$$t_{AB}^{(base)} = q^{(base,TCP)}(t_B) \cdot t_{AB}^{(TCP)}, \qquad t_{AB}^{(ref)} = q^{(ref,sensor)}(t_B) \cdot t_{AB}^{(sensor)},$$
$$t_{BC}^{(base)} = q^{(base,TCP)}(t_B) \cdot t_{BC}^{(TCP)}, \qquad t_{BC}^{(ref)} = q^{(ref,sensor)}(t_B) \cdot t_{BC}^{(sensor)}.$$
Now the $t_{AB}$ and $t_{BC}$ directions are known in all frames; the following orthogonal directions can be defined based on them:
$$i_1 = (t_{AB} + t_{BC})_{norm}, \qquad i_2 = (t_{AB} \times t_{BC})_{norm}, \qquad i_3 = i_1 \times i_2,$$
which construct a frame denoted by $i$. The method and the resulting frame are illustrated in Figure 3.
Determining these base vectors from the vectors given by the $ref$ frame, the rotation matrix between the frames $ref$ and $i$ can be described as
$$R^{(ref,i)} = \begin{bmatrix} i_1 & i_2 & i_3 \end{bmatrix}.$$
It can also be computed for the frames $base$, $TCP$ and $sensor$ as $R^{(base,i)}$, $R^{(TCP,i)}$ and $R^{(sensor,i)}$.
Then, the initial guess for the quaternions $q^{(base,ref)}$ and $q^{(TCP,sensor)}$ can be computed as
$$q_0^{(base,ref)} = \mathrm{rotm2quat}\left(R^{(base,i)} \cdot (R^{(ref,i)})^T\right),$$
$$q_0^{(TCP,sensor)} = \mathrm{rotm2quat}\left(R^{(TCP,i)} \cdot (R^{(sensor,i)})^T\right).$$
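A minimal sketch of this initial-guess step is given below (our illustration under the assumption of SciPy scalar-last quaternions and three preselected sample indices; helper names such as `rotation_axis` are hypothetical, not taken from the paper's code):

```python
# Minimal sketch of the initial-guess step (our illustration; SciPy scalar-last
# quaternions; iA, iB, iC are the indices of the three preselected samples).
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_axis(q_from, q_to):
    """Unit axis and angle of the relative rotation between two orientations.
    as_rotvec() always returns a non-negative angle, so no axis flip is needed."""
    rotvec = (R.from_quat(q_from).inv() * R.from_quat(q_to)).as_rotvec()
    angle = np.linalg.norm(rotvec)            # samples assumed well separated (60-120 deg)
    return rotvec / angle, angle

def frame_from_axes(t_ab, t_bc):
    """Orthonormal frame i built from the two rotation axes (columns i1, i2, i3)."""
    i1 = (t_ab + t_bc) / np.linalg.norm(t_ab + t_bc)
    i2 = np.cross(t_ab, t_bc)
    i2 /= np.linalg.norm(i2)
    i3 = np.cross(i1, i2)
    return np.column_stack([i1, i2, i3])

def initial_guess(q_base_tcp, q_ref_sensor, iA, iB, iC):
    """Initial estimates of q_(base,ref) and q_(TCP,sensor)."""
    # rotation axes expressed in the TCP and sensor frames
    t_ab_tcp, _ = rotation_axis(q_base_tcp[iA], q_base_tcp[iB])
    t_bc_tcp, _ = rotation_axis(q_base_tcp[iB], q_base_tcp[iC])
    t_ab_sen, _ = rotation_axis(q_ref_sensor[iA], q_ref_sensor[iB])
    t_bc_sen, _ = rotation_axis(q_ref_sensor[iB], q_ref_sensor[iC])
    # the same axes expressed in the base and reference frames
    rot_b = R.from_quat(q_base_tcp[iB])
    rot_s = R.from_quat(q_ref_sensor[iB])
    R_base_i   = frame_from_axes(rot_b.apply(t_ab_tcp), rot_b.apply(t_bc_tcp))
    R_ref_i    = frame_from_axes(rot_s.apply(t_ab_sen), rot_s.apply(t_bc_sen))
    R_tcp_i    = frame_from_axes(t_ab_tcp, t_bc_tcp)
    R_sensor_i = frame_from_axes(t_ab_sen, t_bc_sen)
    # initial transforms between the frame pairs
    q0_base_ref   = R.from_matrix(R_base_i @ R_ref_i.T).as_quat()
    q0_tcp_sensor = R.from_matrix(R_tcp_i @ R_sensor_i.T).as_quat()
    return q0_base_ref, q0_tcp_sensor
```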

3.2. Local Optimization

The previous step provided a good calibration for the three selected measurements. The local optimization step refines this calibration by taking into account all measurements for 0 t < t c a l .
The correction of the orientations is defined with Roll–Pitch–Yaw variables ($x$)
$$q_{RPY}(x) = \left(\cos(x_1/2) + \mathbf{i}\sin(x_1/2)\right) \cdot \left(\cos(x_2/2) + \mathbf{j}\sin(x_2/2)\right) \cdot \left(\cos(x_3/2) + \mathbf{k}\sin(x_3/2)\right),$$
where small angles are assumed due to the good initial guess.
With this correction term, the quaternions can be defined as
$$q^{(base,ref)}(x) = q_{RPY}(x) \cdot q_0^{(base,ref)},$$
$$q^{(TCP,sensor)}(y) = q_{RPY}(y) \cdot q_0^{(TCP,sensor)}.$$
According to the parameters $x$ and $y$, the measurements can be evaluated as
$$q^{(base,\,sensor\,by\,robot)}(t,x) = q^{(base,ref)}(x) \cdot q^{(ref,sensor)}(t), \qquad q^{(base,\,sensor\,by\,sensor)}(t,y) = q^{(base,TCP)}(t) \cdot q^{(TCP,sensor)}(y),$$
then the error of the measurement can be computed as the angle of a quaternion
$$\epsilon(t,x,y) = \mathrm{angle}\left(q^{(sensor\,by\,sensor,\;sensor\,by\,robot)}(t,x,y)\right),$$
where
$$q^{(sensor\,by\,sensor,\;sensor\,by\,robot)}(t,x,y) = q^{(base,\,sensor\,by\,sensor)}(t,y)^{-1} \cdot q^{(base,\,sensor\,by\,robot)}(t,x).$$
Based on this derivation, the optimization problem can be written as
$$\underset{x \in \mathbb{R}^3,\; y \in \mathbb{R}^3}{\mathrm{minimize}} \sum_{t < t_{cal}} \epsilon(t,x,y)^2,$$
where a Nelder–Mead optimization can be used, initialized from $x = 0$, $y = 0$.
From the resulting $x$, $y$ vectors, the error of a measurement can be written as
$$\epsilon(t) = \epsilon(t, x, y).$$
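A possible implementation sketch of this refinement, using SciPy's Nelder–Mead solver, is shown below (our illustration, not the authors' code; it assumes scalar-last quaternion arrays restricted to the $t < t_{cal}$ portion and uses intrinsic X–Y–Z Euler angles to mimic the $q_{RPY}$ correction):

```python
# Minimal sketch of the local-optimization refinement (our illustration).
# Assumptions: scalar-last quaternions; the input arrays contain only the
# t < t_cal portion; intrinsic X-Y-Z Euler angles mimic the q_RPY correction.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

def refine(q_base_tcp, q_ref_sensor, q0_base_ref, q0_tcp_sensor):
    rob = R.from_quat(q_base_tcp)      # q_(base,TCP)(t) for the calibration samples
    sen = R.from_quat(q_ref_sensor)    # q_(ref,sensor)(t) for the calibration samples

    def corrected(params):
        x, y = params[:3], params[3:]
        q_base_ref   = R.from_euler('XYZ', x) * R.from_quat(q0_base_ref)
        q_tcp_sensor = R.from_euler('XYZ', y) * R.from_quat(q0_tcp_sensor)
        return q_base_ref, q_tcp_sensor

    def cost(params):
        q_base_ref, q_tcp_sensor = corrected(params)
        by_robot  = q_base_ref * sen                     # via the reference chain
        by_sensor = rob * q_tcp_sensor                   # via the robot chain
        eps = (by_sensor.inv() * by_robot).magnitude()   # per-sample angle error
        return np.sum(eps ** 2)

    res = minimize(cost, np.zeros(6), method='Nelder-Mead')
    q_base_ref, q_tcp_sensor = corrected(res.x)
    return q_base_ref.as_quat(), q_tcp_sensor.as_quat()
```

Because the initial guess already brings the residual corrections close to zero, a local, derivative-free search of this kind is sufficient in practice.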

4. Experimental Demonstration

This section presents examples of validating various orientation sensors using a UR16e manipulator [24] with the proposed method. The results provide a straightforward comparison of the devices’ performance. We investigated two sensors: a MEMS-based IMU sensor (Manufacturer: TDK InvenSense, San Jose, CA, USA; Type ID: ICM-20948) and the HTC VIVE VR headset equipped with 6DoF tracking capabilities (so-called inside-out tracking).

4.1. ICM20948 IMU with Disabled Magnetometer

The IMU sensor ICM20948 [25] used in the experiment integrates gyroscope, magnetometer, and accelerometer units in a single package. For this measurement, the magnetometer was turned off because of the disturbing magnetic field of the manipulator. It was found that the micro-vibrations of the manipulator can influence the performance of the sensor to a considerable extent, causing noise and drift in the output signal. For this reason, the sensor was attached to the manipulator in silicone bedding instead of using a rigid mounting fixture.
Furthermore, the effect of these vibrations was also diminished by applying a higher robot speed (2–4 [cm]/step). Figure 4 shows the path of the manipulator. After an initial motion phase, it performs five full circles. The figure also shows the orientation at each measurement point, with red line segments in the x direction and blue segments in the z direction.
First, three measurements (samples recorded at $t_A$, $t_B$, and $t_C$) must be chosen whose relative rotations are as close to perpendicular as possible.
These were obtained by performing a search through the $t < t_{cal}$ section of the logged data, according to the following optimum criterion (see Section 3.1):
$$\underset{t_A, t_B, t_C < t_{cal}}{\mathrm{minimize}} \; \max\left(|\delta_{AB}|, |\delta_{BC}|, |\delta_t|\right) \quad \text{subject to} \quad \delta_{AB} = \pi/2 - \varphi_{AB}, \quad \delta_{BC} = \pi/2 - \varphi_{BC}, \quad \delta_t = \pi/2 - \mathrm{angle}(t_{AB}, t_{BC}),$$
where, in the optimal case, $\delta = \max(|\delta_{AB}|, |\delta_{BC}|, |\delta_t|) = 0$, so the angles are perpendicular. A brute-force variant of such a search is sketched after this paragraph.
In the initial phase of the path ($t < t_{cal}$), the procedure automatically looks for three measured orientations that are inclined at least 60 degrees and less than 120 degrees relative to each other (in this case, their indices were $i_A = 1$, $i_B = 374$ and $i_C = 457$). The angle differences between the orientations are $\varphi_{AB} = 89.65\,[\mathrm{deg}]$ and $\varphi_{BC} = 89.29\,[\mathrm{deg}]$. Furthermore, the angle of $t_{AB}$ and $t_{BC}$ is $89.68\,[\mathrm{deg}]$ because of the initial path segment, and $\delta$ is almost zero.
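The sketch below illustrates such a brute-force search (our illustration, not the authors' implementation; the calibration portion is subsampled with a stride so that the triple loop stays tractable, and scalar-last quaternions are assumed):

```python
# Minimal sketch of a brute-force triplet search (our illustration, not the
# authors' implementation). The calibration portion is subsampled with a stride
# so that the triple loop stays tractable.
import numpy as np
from scipy.spatial.transform import Rotation as R

def select_triplet(q_base_tcp, n_cal, stride=10):
    """Return indices (a, b, c) minimizing the deviation delta from perpendicularity."""
    rots = R.from_quat(q_base_tcp[:n_cal])
    candidates = range(0, n_cal, stride)
    best, best_delta = None, np.inf
    for a in candidates:
        for b in candidates:
            if b <= a:
                continue
            rv_ab = (rots[a].inv() * rots[b]).as_rotvec()
            phi_ab = np.linalg.norm(rv_ab)
            if phi_ab < 1e-3:
                continue                      # nearly identical orientations
            for c in candidates:
                if c <= b:
                    continue
                rv_bc = (rots[b].inv() * rots[c]).as_rotvec()
                phi_bc = np.linalg.norm(rv_bc)
                if phi_bc < 1e-3:
                    continue
                cos_t = np.dot(rv_ab, rv_bc) / (phi_ab * phi_bc)
                angle_t = np.arccos(np.clip(cos_t, -1.0, 1.0))
                delta = max(abs(np.pi / 2 - phi_ab),
                            abs(np.pi / 2 - phi_bc),
                            abs(np.pi / 2 - angle_t))
                if delta < best_delta:
                    best, best_delta = (a, b, c), delta
    return best, np.degrees(best_delta)
```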
With the results of the calibration, the $\epsilon(t)$ error of the orientation of the IMU sensor (considering the robot as a reference) can be computed; it is plotted in Figure 5 in blue. The histogram of the error is plotted in Figure 6. They clearly show that the error is mostly below $1.5\,[\mathrm{deg}]$, and by plotting the magnitude of rotations between the measurements (in red in Figure 5), it is easy to see that significant errors are only measured during fast rotations. These errors are partially caused by the unavoidable time difference between signal processing in the robot and the sensors, which results in a certain time delay between the parallel orientation samples.
It should be emphasized that the contribution of this paper is confined to the method and procedure for registering the robot’s and the orientation sensor’s references. Consequently, the investigation of the time delay is intentionally considered beyond the scope of this work. Measuring or identifying time delays in commercial systems (e.g., robot controllers; IMU’s internal signal processing, etc.), which are out of the researchers’ control, may not even be feasible in general. However, using the proposed procedure, a comparative delay analysis can be performed with multiple orientation sensors mounted on the robot simultaneously or consecutively. Such an analysis could be valuable from an engineering perspective when evaluating sensing technologies for specific applications.

4.2. HTC VIVE (IMU and SLAM Sensor-Fusion Using 6 Cameras)

The HTC VIVE Cosmos headset uses six cameras and an IMU unit for its SLAM algorithm and can be considered a cutting-edge inside-out tracking technology. The headset was fixed to the flange of the manipulator and was moved along the path plotted in Figure 4, without the initial segment. Because it was not sensitive to the vibrations of slow robot motions, two rounds were recorded at different speeds.
In this case, an automatic search was also applied to obtain orientation samples with $60\,[\mathrm{deg}] \le \varphi \le 120\,[\mathrm{deg}]$ angles between them. The indices of the chosen measurements were $i_A = 1$, $i_B = 228$ and $i_C = 1462$, and the angles between them are $\varphi_{AB} = 30.03\,[\mathrm{deg}]$ and $\varphi_{BC} = 36.29\,[\mathrm{deg}]$. Furthermore, the angle of $t_{AB}$ and $t_{BC}$ is $30.01\,[\mathrm{deg}]$, so in this case, $\delta = 60\,[\mathrm{deg}]$.
After performing the calibration method described in Section 3, the computation results in the $\epsilon(t)$ errors plotted in Figure 7 in blue and in the histogram of Figure 8. The figures show that the error is mostly below $0.6\,[\mathrm{deg}]$. The rotation angle between the measurements is also plotted in Figure 7 in red, and it clearly shows that the larger differences between the sensors are measured only if the device is rotating.

4.3. Notes on the Robot Trajectory

The proposed method is designed to be robust to the robot’s motion during measurement, provided that the key poses required for calibration are carefully selected (see Section 3). Specifically, the robot path must be planned to avoid kinematic singularities, as approaching these configurations can lead to large joint velocities, causing vibrations and potentially affecting the accuracy of orientation measurements.
Outside of these singularities, the robot’s path has no impact on the algorithm’s sensitivity, since the calibration process depends solely on the relative angular separation between the key poses. As long as these key poses are properly captured during the calibration phase and the relative orientation between the robot’s end-effector and the sensor is preserved, the method will maintain its reliability. By ensuring smooth, controlled robot motions and steering clear of singular configurations, the robustness of the proposed method can be upheld across various trajectories.

5. Future Work

Concerning further research, two directions are considered. The first goal is performing experimental work on the validation of standalone 6DoF tracking sensors, like [30], on a similar robotic setup, extending the investigation to positioning accuracy. The other research direction aims to apply the presented approach for the geometric calibration of manipulators, as well as the real-time monitoring of the consistency between the joint trajectory and the Cartesian trajectory of robot manipulators with a known kinematic model. Such a monitoring method can be beneficially applied in mission-critical medical robotic systems, e.g., the CyberKnife [31].

6. Conclusions

This paper proposed a generic method to validate orientation sensors using an independent reference of higher accuracy. The method has two main steps. First, a good initial guess is computed via simple geometric computation, which is followed by a refinement via local optimization starting from the previously computed initial guess. The method is presented in a generic manner, enabling its application with any kind of programmable manipulator and the associated orientation measurement instrumentation. Commercial robot arms are compact and reliable devices suitable for this purpose. The main benefits of the proposed method include the repeatable excitation pattern that allows for the comparison of various orientation sensors under the same conditions regarding the orientation trajectory, external disturbances, visual environment, etc. The main limitation of the method is its reliance on the assumption that the robot’s orientation measurements are perfectly accurate. Furthermore, the method may encounter challenges when applied to sensors with significant drift, as this can lead to larger discrepancies during the calibration process. Nevertheless, these limitations are generally not critical for most engineering applications. The method is illustrated by validating an IMU (ICM20948) and a complex inside-out tracking system (HTC VIVE Cosmos) with a UR16e manipulator. The experiments show that concerning the investigated sensors, significant orientation errors only occur at larger angular velocities.

Author Contributions

Conceptualization, P.G.; methodology, J.K. and P.G.; software, J.K. and T.P.; validation, J.K.; formal analysis, J.K.; investigation, J.K., T.P. and P.G.; resources, P.G.; data curation, J.K.; writing—original draft preparation, J.K.; writing—review and editing, P.G.; supervision, P.G. All authors have read and agreed to the published version of the manuscript.

Funding

Péter Galambos is supported by the ÚNKP-23-5 (Bolyai+) New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. Péter Galambos is a Bolyai Fellow of the Hungarian Academy of Sciences. This research was supported by the Consolidator Researcher Grant of Obuda University (Grant ID: OKPPKPC004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Measurement data and supplementary photo documentation are available at: https://github.com/ABC-iRobotics/OrientationSensorValidatinon-PublicData-MDPI-Sensors (accessed on 20 October 2024).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The orientations are described via unit quaternions. The main operations for their usage are discussed here. Denote the unit quaternion as
$$q = w + x\mathbf{i} + y\mathbf{j} + z\mathbf{k},$$
where $w^2 + x^2 + y^2 + z^2 = 1$. The related rotation matrix can be written as
$$R_q = \begin{bmatrix} 1 - 2(y^2 + z^2) & 2xy - 2zw & 2xz + 2yw \\ 2xy + 2zw & 1 - 2(x^2 + z^2) & 2zy - 2xw \\ 2xz - 2yw & 2yz + 2xw & 1 - 2(x^2 + y^2) \end{bmatrix}.$$
The quaternion corresponding to a rotation matrix can be computed via Shepperd’s formula; see [32]. The quaternion describing a rotation around axis $t$ by angle $\varphi$ is
$$q = \cos(\varphi/2) + \sin(\varphi/2)\left(t_x\mathbf{i} + t_y\mathbf{j} + t_z\mathbf{k}\right),$$
and so is its opposite, $-q$. The angle of a quaternion can be computed as
$$\varphi = 2 \cdot \mathrm{atan2}\left(\sqrt{x^2 + y^2 + z^2},\; w\right),$$
and the axis from the direction of the complex part, taking into account the sign of the real part.
The rotation of a vector $v$ by a quaternion $q$ (written $q \cdot v$) can be computed as the matrix product $R_q \cdot v$.
If a vector $v$ represents a direction described in frame $A$, it is written as $v^{(A)}$.
Furthermore, the indexing of rotation matrices and quaternions must be understood as
$$v^{(B)} = q^{(B,A)} \cdot v^{(A)}, \qquad \text{and} \qquad v^{(B)} = R^{(B,A)} \cdot v^{(A)}.$$
The quaternion that represents the inverse rotation can be computed as
$$q^{-1} = w - x\mathbf{i} - y\mathbf{j} - z\mathbf{k}.$$
Denote another unit quaternion as
$$q_2 = w_2 + x_2\mathbf{i} + y_2\mathbf{j} + z_2\mathbf{k}.$$
The product of the rotations $q$ and $q_2$, used in computations such as $q \cdot (q_2 \cdot v)$, can be computed as
$$q \cdot q_2 = w w_2 - x x_2 - y y_2 - z z_2 + \mathbf{i}\left(w x_2 + x w_2 + y z_2 - z y_2\right) + \mathbf{j}\left(w y_2 - x z_2 + y w_2 + z x_2\right) + \mathbf{k}\left(w z_2 + x y_2 - y x_2 + z w_2\right).$$
The angle of orientations $A$ and $B$ can be computed as the angle of the quaternion $q^{(A,B)}$.
The normalization of a vector is denoted as
$$(v)_{norm} = v / \|v\|.$$
The angle of two vectors $v$, $v_2$ is computed as
$$\varphi = \arccos\left(v^T v_2 / (\|v\| \cdot \|v_2\|)\right).$$
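For completeness, these operations translate directly into a few lines of NumPy. The following sketch uses the scalar-first $(w, x, y, z)$ ordering of this appendix; the function names are our own illustrations:

```python
# Minimal sketch of the appendix operations (scalar-first (w, x, y, z) convention).
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix R_q of a unit quaternion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*(x*x + z*z), 2*z*y - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*(x*x + y*y)],
    ])

def quat_inverse(q):
    """Inverse (conjugate) of a unit quaternion."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_angle(q):
    """Rotation angle of a unit quaternion, in radians."""
    w, x, y, z = q
    return 2.0 * np.arctan2(np.sqrt(x*x + y*y + z*z), w)

def rotate_vector(q, v):
    """Rotate a 3-vector by a unit quaternion: q . v = R_q v."""
    return quat_to_matrix(q) @ np.asarray(v)

def vector_angle(v, v2):
    """Angle between two vectors, in radians."""
    c = np.dot(v, v2) / (np.linalg.norm(v) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0))
```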

References

  1. Nijkamp, J.; Schermers, B.; Schmitz, S.; Jonge, C.S.d.; Kuhlmann, K.F.D.; Heijden, F.v.d.; Sonke, J.; Ruers, T.J. Comparing position and orientation accuracy of different electromagnetic sensors for tracking during interventions. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1487–1498. [Google Scholar] [CrossRef] [PubMed]
  2. Rumiński, D.; Maik, M.; Walczak, K. Visualizing Financial Stock Data within an Augmented Reality Trading Environment. Acta Polytech. Hung. 2019, 16, 223–239. [Google Scholar] [CrossRef]
  3. Shi, L.; Xun, J.; Chen, S.; Zhao, L.; Shi, Y. An orientation estimation algorithm based on multi-source information fusion. Meas. Sci. Technol. 2018, 29, 115101. [Google Scholar] [CrossRef]
  4. Suvorkin, V.; Garcia-Fernandez, M.; González-Casado, G.; Li, M.; Rovira-Garcia, A. Assessment of noise of MEMS IMU sensors of different grades for GNSS/IMU navigation. Sensors 2024, 24, 1953. [Google Scholar] [CrossRef] [PubMed]
  5. Shkel, A.M.; Wang, Y. Inertial Sensors and Inertial Measurement Units. In Pedestrian Inertial Navigation with Self-Contained Aiding; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021; Chapter 2; pp. 17–36. [Google Scholar] [CrossRef]
  6. D’Ippolito, F.; Garraffa, G.; Sferlazza, A.; Zaccarian, L. A hybrid observer for localization from noisy inertial data and sporadic position measurements. Nonlinear Anal. Hybrid Syst. 2023, 49, 101360. [Google Scholar] [CrossRef]
  7. Huo, Y.; Zhang, W.; Zhang, J.; Yang, H.J. Using microseismic events to improve the accuracy of sensor orientation for downhole microseismic monitoring. Geophys. Prospect. 2021, 69, 1167–1180. [Google Scholar] [CrossRef]
  8. Niehorster, D.; Li, L.; Lappe, M. The accuracy and precision of position and orientation tracking in the HTC Vive virtual reality system for scientific research. I-Perception 2017, 8, 204166951770820. [Google Scholar] [CrossRef]
  9. Veen, S.M.v.d.; Bordeleau, M.; Pidcoe, P.E.; Thomas, J.S. Agreement analysis between Vive and Vicon systems to monitor lumbar postural changes. Sensors 2019, 19, 3632. [Google Scholar] [CrossRef]
  10. Sylcott, B.; Williams, K.R.; Hinderaker, M.; Lin, C. Comparison of HTC Vive™ virtual reality headset position measures to center of pressure measures. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 2333–2336. [Google Scholar] [CrossRef]
  11. Morrow, M.M.; Lowndes, B.R.; Fortune, E.; Kaufman, K.R.; Hallbeck, M.S. Validation of inertial measurement units for upper body kinematics. J. Appl. Biomech. 2017, 33, 227–232. [Google Scholar] [CrossRef]
  12. Vox, J.P.; Weber, A.; Wolf, K.I.; Izdebski, K.; Schüler, T.; König, P.; Wallhoff, F.; Friemert, D. An evaluation of motion trackers with virtual reality sensor technology in comparison to a marker-based motion capture system based on joint angles for ergonomic risk assessment. Sensors 2021, 21, 3145. [Google Scholar] [CrossRef] [PubMed]
  13. Bliley, K.; Kaufman, K.R.; Gilbert, B.K. Methods for validating the performance of wearable motion-sensing devices under controlled conditions. Meas. Sci. Technol. 2009, 20, 045802. [Google Scholar] [CrossRef]
  14. Eastman, J.D.; Marvel, J.A.; Falco, J.A.; Hong, T.H. Measurement science for 6DoF object pose ground truth. In Proceedings of the 2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Washington, DC, USA, 21–23 October 2013. [Google Scholar] [CrossRef]
  15. Herickhoff, C.D.; Morgan, M.R.; Broder, J.; Dahl, J.J. Low-cost volumetric ultrasound by augmentation of 2D systems: Design and prototype. Ultrason. Imaging 2017, 40, 35–48. [Google Scholar] [CrossRef] [PubMed]
  16. Schmidt, A.; Kraft, M. The impact of the image feature detector and descriptor choice on visual SLAM accuracy. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2015; pp. 203–210. [Google Scholar] [CrossRef]
  17. Białecka, M.; Gruszczyński, K.; Cisowski, P.; Kaszyński, J.; Baka, C.; Lubiatowski, P. Shoulder Range of Motion Measurement Using Inertial Measurement Unit—Validation with a Robot Arm. Sensors 2023, 23, 5364. [Google Scholar] [CrossRef] [PubMed]
  18. Kirking, B.; El-Gohary, M.; Kwon, Y. The feasibility of shoulder motion tracking during activities of daily living using inertial measurement units. Gait Posture 2016, 49, 47–53. [Google Scholar] [CrossRef] [PubMed]
  19. Botero-Valencia, J.; Marquez-Viloria, D.; Castano-Londono, L.; Morantes-Guzmán, L. A low-cost platform based on a robotic arm for parameters estimation of Inertial Measurement Units. Measurement 2017, 110, 257–262. [Google Scholar] [CrossRef]
  20. Hislop, J.; Isaksson, M.; McCormick, J.; Hensman, C. Validation of 3-Space Wireless Inertial Measurement Units Using an Industrial Robot. Sensors 2021, 21, 6858. [Google Scholar] [CrossRef]
  21. Kuti, J.; Piricz, T.; Galambos, P. Method for Direction and Orientation Tracking Using IMU Sensor. IFAC-PapersOnLine 2023, 56, 10774–10780. [Google Scholar] [CrossRef]
  22. Gao, G.; Zhang, H.; San, H.; Sun, G.; Wu, D.D.; Wang, W. Kinematic calibration for industrial robots using articulated arm coordinate machines. Int. J. Model. Identif. Control 2019, 31, 16. [Google Scholar] [CrossRef]
  23. Morsi, N.M.; Mata, M.; Harrison, C.S.; Semple, D. Autonomous robotic inspection system for drill holes tilt: Feasibility and development by advanced simulation and real testing. In Proceedings of the 2023 28th International Conference on Automation and Computing (ICAC), Birmingham, UK, 30 August–1 September 2023. [Google Scholar] [CrossRef]
  24. Universal Robots. Universal Robot UR16e. Available online: https://www.universal-robots.com/products/ur16-robot/ (accessed on 28 September 2024).
  25. TDK–InvenSense. ICM-20948 World’s Lowest Power 9-Axis MEMS MotionTracking Device. Available online: https://invensense.tdk.com/products/motion-tracking/9-axis/icm-20948/ (accessed on 28 September 2024).
  26. Borges, M.; Symington, A.; Coltin, B.; Smith, T.; Ventura, R. HTC vive: Analysis and accuracy improvement. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2610–2615. [Google Scholar]
  27. Dempsey, P. The teardown: HTC Vive VR headset. Engin. Tech. 2016, 11, 80–81. [Google Scholar]
  28. Soffel, F.; Zank, M.; Kunz, A. Postural stability analysis in virtual reality using the HTC Vive. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, Munich, Germany, 2–4 November 2016; pp. 351–352. [Google Scholar]
  29. Kuti, J.; Piricz, T.; Galambos, P. OrientationSensorValidatinon-PublicData-MDPI-Sensors. Available online: https://github.com/ABC-iRobotics/OrientationSensorValidatinon-PublicData-MDPI-Sensors (accessed on 28 September 2024).
  30. Sevensense Robotics AG. Alphasense Position. Available online: https://www.sevensense.ai/product/alphasense-position (accessed on 30 September 2024).
  31. Kilby, W.; Naylor, M.; Dooley, J.R.; Maurer, C.R., Jr.; Sayeh, S. A technical overview of the CyberKnife system. In Handbook of Robotic and Image-Guided Surgery; Elsevier: Amsterdam, The Netherlands, 2020; pp. 15–38. [Google Scholar]
  32. Shepperd, S.W. Quaternion from rotation matrix. J. Guid. Control 1978, 1, 223–224. [Google Scholar] [CrossRef]
Figure 1. HTC Vive Cosmos VR headset mounted on a UR16e robot.
Figure 2. Robot manipulator with an orientation sensor (illustrated as a box on the flange) with the frames and measured quantities considered throughout this paper.
Figure 3. The sensor (illustrated by a box) in three poses (A, B, C), the axes of rotation, and the resulting frame $i$ with base vectors $i_1$, $i_2$, and $i_3$.
Figure 4. Three-dimensional TCP path of the measurement. Red and blue segments show the x and z directions, respectively.
Figure 5. The computed $\epsilon(t)$ error of the IMU orientation with respect to the orientation computed from the robot model (blue); rotation angle between the measured TCP orientations (red). Larger errors occur only during transient motion.
Figure 6. Histogram of the computed $\epsilon(t)$ error of the IMU orientation with respect to the orientation computed from the robot model.
Figure 7. The computed $\epsilon(t)$ error of the HTC VIVE orientation with respect to the orientation computed from the robot model (blue); rotation angle between the measured TCP orientations (red). Larger errors occur only during transient motion.
Figure 8. Histogram of the computed $\epsilon(t)$ error of the HTC VIVE orientation with respect to the orientation computed from the robot model.