Article

Position and Posture Measurements Using Laser Projection Markers for Infrastructure Inspection

Faculty of Engineering, University of Toyama, 3190 Gofuku, Toyama-shi, Toyama 930-8555, Japan
* Author to whom correspondence should be addressed.
Electronics 2020, 9(5), 807; https://doi.org/10.3390/electronics9050807
Submission received: 7 April 2020 / Revised: 22 April 2020 / Accepted: 11 May 2020 / Published: 14 May 2020
(This article belongs to the Section Systems & Control Engineering)

Abstract

Infrastructure such as roads, tunnels and bridges needs periodic inspection. The conventional inspection method, in which a human inspector uses a crack gauge, can suffer from measurement errors and lengthy inspection times. Mobile robots and image processing have therefore been adopted in the infrastructure inspection field, with image sensors or other sensors used to measure the state of the structure. During automatic inspection, the mobile robot must know its own position and posture. The Global Positioning System (GPS) is generally used to sense the position of a mobile robot, but GPS is unusable in covered places such as under bridges and inside tunnels. We therefore developed a system that senses the position and posture of mobile robots. The system uses laser projection markers and cameras and has a very simple configuration: a camera photographs the target structure together with the projected laser markers, and the position and posture of the camera and mobile robot are calculated by image processing. The system makes infrastructure inspection more efficient and reduces inspection time. In this paper, we examine the number of laser markers required and verify the method by experiment.

1. Introduction

Mobile robots have been used in the infrastructure inspection field to save personnel and increase the efficiency of inspection [1,2,3,4]. It is necessary that the mobile robot knows its own position and posture during automatic inspection [5,6,7]. Many sensors are available for a mobile robot to determine its own position. The Global Positioning System (GPS) is the most popular sensor for mobile robots that move outdoors, and recent improvements in GPS positioning precision have made such measurements more accurate. However, GPS is not usable in covered places such as under bridges and inside tunnels. Techniques that supplement GPS include stereo-camera methods [8] and structure from motion [9]. A number of image-processing techniques are already in common use [10,11,12]. They are effective because they require no external communication and can measure a position from an image alone [13,14,15,16,17,18]. Image processing has also been used to inspect infrastructure [19,20,21,22,23,24,25,26,27,28,29]. However, these methods have two problems: occlusion and the correspondence of points between images. In multiple images taken with parallax, the same point must be identified in each image, but finding corresponding points becomes difficult when the measurement subject is not easily distinguished from its surroundings. Pattern matching is a common method for finding corresponding points, but its drawback is that the target shape must be simple. Alternatively, an active stereo-camera method can be used, in which a device projects light that is captured by the camera. However, the stereo-camera method has a narrow measurement area and remains vulnerable to occlusion. When cameras measure plain surfaces with no feature points, such as a concrete wall, measurement precision becomes low. A laser rangefinder (LRF) is effective for acquiring three-dimensional data over a wide range, but the cost of processing the resulting point cloud is enormous. The LRF is a representative non-contact measurement device [30]. It consists of a laser and cameras: the irradiation angle is changed via mirrors so that the laser can irradiate any point on the measurement subject, and the three-dimensional coordinates of the irradiated point are obtained by triangulation from the position of the laser, the irradiation angle and the position of the camera. Object characteristics can be measured quickly compared with contact-type devices. However, because the LRF is fundamentally a point measurement, a scanning mechanism is needed to move the irradiation point over the whole object in order to obtain its complete three-dimensional shape. In addition, mirror-like surfaces such as polished metal products theoretically cannot be measured. RGB-d cameras measure depth information by time of flight (TOF) and capture optical images simultaneously, providing rich information that facilitates 3D modeling of the measured object. Many studies on the self-positioning of mobile robots have used RGB-d cameras in particular [31,32,33]. However, the principle for obtaining depth information is the same as that of the LRF, so RGB-d cameras are not suitable for precise measurement because they do not solve the problems described above.
Therefore, we developed a system to sense the position and posture of infrastructure inspection robots using laser projection markers. Figure 1 shows the concept of our system, which obtains the relative position and posture in the projection plane. In our previous studies [34,35,36,37], we used physical markers; in this study, we use projection markers that can also measure distant objects. Camera 2 takes detailed pictures for inspection. Camera 1 photographs the whole scene and can determine where the photographing area of Camera 2 is, because it obtains the position and posture of Camera 2 from its projection markers. Because Camera 1 tells Camera 2 its position, Camera 2 can photograph the whole target without gaps. Camera 1 and Camera 2 can obtain each other's positions from the geometric relations between the laser markers and the cameras by photographing each other's laser markers. When Camera 2 looks at its own laser markers on the projection plate, we call this the "subject view"; when Camera 1 looks at Camera 2's laser markers on the projection plate, we call this the "objective view". We obtain the relative position and posture of Camera 2 with respect to Camera 1 from the objective view. Each camera is equipped with laser pointers that irradiate in the photographing direction, and the camera measures the relative position of the object by photographing the reflected markers. The system uses laser projection markers and cameras and has a very simple configuration: the camera photographs the target structure and the projected laser markers, and the position and posture of the camera and mobile robot are calculated by image processing. Unlike existing landmark methods, this system does not require landmarks to be installed in the environment. RGB-d sensing requires matching between optical and distance images, which can introduce processing errors. In contrast, our method is based only on the optical image of the laser markers projected on the target surface, so there is no matching between different data types and no error from such internal factors. It also reduces the amount of information required and speeds up processing. The system makes infrastructure inspection more efficient and reduces inspection time.

2. Measurement Method

2.1. The Measurement System Model Using Projection Laser Markers

A laser pointer that irradiates in the photographing direction is mounted on each camera, and the camera measures the relative position of the object by photographing the reflected marker. Figure 2 shows the geometry of the measurement system using projection laser markers. Camera 1 photographs the whole scene and can photograph the projection laser markers of Camera 2. The position and posture relationships between each camera and its laser beams are known. Each laser beam crosses the target projection plane and creates a projection laser marker. Camera 1's coordinate system Xc1-Yc1-Zc1 is the global coordinate system, and Camera 2's coordinate system is Xc2-Yc2-Zc2. The coordinate system of the projection plate is Xpl-Ypl-Zpl. The position and posture of each local coordinate system are expressed by a translation and a rotation with respect to Camera 1's global coordinate system. In this model, the target projection plate is assumed to be infinite, so the origin of the plate's coordinate system is placed on the z-axis of Camera 1's coordinate system and the rotation about the Zpl axis can be ignored.
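As an illustration of this frame convention, the following sketch (not from the paper) builds the homogeneous transform of a local frame, such as Camera 2 or the projection plate, expressed in Camera 1's global coordinate system; the rotation order Rz·Ry·Rx is an assumption made only for this example.

```python
# Not from the paper: a small helper expressing a local frame's position and
# posture as a translation plus a rotation with respect to Camera 1's global
# frame, as described above. The rotation about Zpl is ignored for the plate.
import numpy as np

def pose_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform of a local frame (e.g. Camera 2 or the
    projection plate) expressed in Camera 1's coordinate system."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # one common rotation order; an assumption
    T[:3, 3] = (tx, ty, tz)
    return T

# Example: the plate 2 m ahead of Camera 1, tilted 10 degrees about its x axis
T_plate = pose_matrix(0, 0, 2000, np.radians(10), 0, 0)
```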
The following parameters are calculated to obtain the position and posture of Camera 2 from an image taken by Camera 1: the target projection plate parameters zpl-c1, θxpl and θypl; the relative position of Camera 2 with respect to Camera 1, xc2-c1, yc2-c1 and zc2-c1; and the posture of Camera 2, θxc2, θyc2 and θzc2.
The subscripts of the parameters are defined as follows: ci (i = 1, 2): camera i; pl: target projection plate; LAm (m = 1–5): laser pointer m; n (n = 1, 2): index of a point on the laser beam; PR: projection point of the laser beam; pro: projection point on the image plane. f is the focal length and h is the size of the image sensor.
To obtain the relative position and posture between the two cameras through the projection plate, we first obtain the relative position and posture between each camera and the projection plate, and then the relative position and posture between the cameras. In the subject-view measurement, the equation of the projection plane is derived from the coordinates of the three projection points of the laser pointers, and the relative position and posture between the camera and the projection plane are obtained from the parameters of this equation. In Section 2.2, the relative position and posture between Camera 1 and the projection plane are calculated from the coordinates of the projection markers irradiated by Camera 1, measured in the virtual image plane. In the objective-view measurement described in Section 2.3, the relative position and posture between the two cameras are obtained from the relationship of each camera to the plane.

2.2. Measurement of the Projection Laser Marker Positions from the Camera Image: ‘Subject View’

In this section, we determine the relative position and posture between a camera and the target projection plate. Our final aim is to construct an objective-view model that measures the relative position and posture of each camera from the projection laser markers on the target projection plate. Before that, we examine the subject-view model, which measures the positions of the projection laser markers from the camera image. In the subject-view measurement there is one camera, Camera 1, which carries three laser pointers. The geometric relationship between the camera and the laser pointers is known, and the three laser pointers project markers onto the target projection plate to measure the relative position and posture between the camera and the plate. The position of the target projection plate is calculated from the coordinates of the three projection laser markers on the plate. In this paper, the axes of the three laser pointers are parallel to the optical axis of the camera. Figure 3 shows the geometric model of the subject view. The relative position and posture between the camera and the target projection plate are calculated from the position coordinates of the camera's three projection laser markers.
The transformation from the image plane of Camera 1 to the camera coordinate system of Camera 1 is expressed as follows, using a point on a laser beam of Camera 1 and the position vector of the projection point:
$$ \boldsymbol{p}_{c1\text{-}PR1m}^{\,c1} = \begin{pmatrix} x_{LA1mn}^{\,c1} \\ y_{LA1mn}^{\,c1} \\ \dfrac{f \cdot x_{LA1mn}^{\,c1}}{h \cdot u_{PRpro1m}^{\,c1}} \end{pmatrix} = \begin{pmatrix} x_{LA1mn}^{\,c1} \\ y_{LA1mn}^{\,c1} \\ z_{LA1mn}^{\,c1} \end{pmatrix} $$
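The equation can be read as a simple back-projection: because laser beam m is parallel to the optical axis, its lateral offset in the camera frame is fixed by the mount, and the image coordinate of the marker fixes the depth. The following minimal sketch illustrates this under that assumption; the function name and the numerical values are illustrative, not the authors' implementation.

```python
# Minimal sketch of the back-projection in the equation above, assuming laser
# beam m is parallel to the camera optical axis so its lateral offset
# (x_LA, y_LA) in camera coordinates is known from the mount.

def marker_position_in_camera(x_la, y_la, u_img, f, h):
    """Return the 3D position of the projected laser marker in the camera
    frame. u_img is the marker's image-plane coordinate, f the focal length
    and h the image-sensor size, as in the equation above."""
    z = (f * x_la) / (h * u_img)      # depth recovered from the image coordinate
    return (x_la, y_la, z)

# Example (hypothetical numbers): a laser offset 50 mm from the optical axis,
# f = 12 mm, sensor size h = 8.45 mm, marker observed at u = 0.04
p_marker = marker_position_in_camera(x_la=50.0, y_la=-35.0, u_img=0.04, f=12.0, h=8.45)
```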
To obtain the relative position and posture between a camera and the target projection plate, the equation of the target projection plate is determined using the position coordinates of the three laser markers on the plate. Two direction vectors are formed from the three marker points, expressed in the coordinate system of Camera 1, and their cross product gives the normal vector of the plane. The components of this normal vector are therefore the parameters of the equation of the target projection plate. Using the magnitude of the normal vector nPR, the unit normal vector of the target projection plate is expressed as follows:
$$ \frac{\boldsymbol{n}_{PR}}{\left| \boldsymbol{n}_{PR} \right|} = \frac{1}{\sqrt{a_{PR}^{2} + b_{PR}^{2} + c_{PR}^{2}}} \begin{pmatrix} a_{PR} \\ b_{PR} \\ c_{PR} \end{pmatrix} = \begin{pmatrix} x_{nePR} \\ y_{nePR} \\ z_{nePR} \end{pmatrix} $$
The relative posture angles θxpl and θypl are expressed as follows using the unit normal vector above:
$$ \theta_{xpl} = \arcsin\!\left( \frac{y_{nePR}}{\cos \theta_{ypl}} \right), \qquad \theta_{ypl} = \arcsin\left( x_{nePR} \right) $$
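A minimal sketch of this subject-view computation is given below, assuming the three marker positions have already been recovered in Camera 1 coordinates (for example with the back-projection sketch above); the function name is illustrative, and the sign of the normal depends on the marker ordering.

```python
# Sketch of the subject-view plate tilt computation, following the two
# equations above. Inputs p1, p2, p3 are marker positions in Camera 1
# coordinates; names and values are illustrative.
import numpy as np

def plate_tilt_from_markers(p1, p2, p3):
    """Return (theta_x_pl, theta_y_pl), the tilt of the target projection
    plate about the camera x and y axes."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)            # plane normal from two in-plane vectors
    x_ne, y_ne, z_ne = n / np.linalg.norm(n)  # unit normal (x_nePR, y_nePR, z_nePR)
    theta_y = np.arcsin(x_ne)
    theta_x = np.arcsin(y_ne / np.cos(theta_y))
    return theta_x, theta_y

# Example: a plate roughly facing the camera, about 1 m away, slightly tilted
angles = plate_tilt_from_markers((50, 0, 1000), (-50, 0, 1010), (0, 50, 1005))
```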

2.3. Measurement of the Projection Laser Markers of Another Camera: ‘Objective View’

In this section, we determine the relative position and posture between Camera 1 and Camera 2 using the relative position and posture between each camera and the target projection plate. A geometric model of the objective view is shown in Figure 2. Camera 2 has five laser pointers with known geometric relationships to Camera 2, and these laser pointers project markers onto the target projection plate so that the relative position and posture between Camera 1 and Camera 2 can be obtained. Three of the five laser pointers are parallel to the optical axis of Camera 2; the other two are not parallel and have different orientations. We obtain the relative position and posture between Camera 1 and Camera 2 by expressing the positions of Camera 2's laser markers in the coordinates of Camera 1's image plane. There is a relationship between the marker spacing and the posture of Camera 2. The distance between laser markers m = 1–4, which are parallel to the optical axis of Camera 2, takes the same value whether the posture angle of Camera 2 is positive or negative. The distances between laser marker m = 5 or 6 and the other laser markers, however, differ for positive and negative posture angles. When the angle of Camera 2 was positive, θxc2 and θyc2 had opposite signs; when the angle of Camera 2 was negative, θxc2 and θyc2 had the same sign. Six parameters are necessary to measure the relative position and posture between Camera 1 and Camera 2 by the objective view, so we used three laser pointers parallel to the optical axis of Camera 2 together with two laser pointers that are not parallel to the optical axis and have different angles from each other. The three parallel pointers were placed, aligned with the optical axis, in the first, second and fourth quadrants of Camera 2's camera coordinate system, symmetrically about its axes. The two non-parallel laser pointers, with arbitrary inclinations, were installed at arbitrary positions in Camera 2's camera coordinate system. Camera 1 photographs the projected laser markers of Camera 2; the coordinates of the projection markers are expressed in the image-plane coordinate system of Camera 1 and then converted into the coordinate system of Camera 1. θzc2 is obtained from the line through laser markers m = 1 and m = 3. The angle θzc2 between the line connecting projection laser markers m = 1 and m = 3 and the xc1 axis is given by the following equation:
$$ \theta_{zc2} = \arctan\!\left( \frac{x_{PR2\,1}^{\,c1} - x_{PR2\,3}^{\,c1}}{y_{PR2\,1}^{\,c1} - y_{PR2\,3}^{\,c1}} \right) $$
The marker coordinates are then rotated by −θzc2, which converts them into projection marker coordinates that no longer contain the posture information θzc2 of Camera 2. Using these coordinates, the posture of Camera 2 is obtained from the distances between the projection laser markers. θxc2 is obtained from the distance between projection laser markers m = 1 and m = 3, and θyc2 from the distance between projection laser markers m = 1 and m = 2. Here, $d_{c1\text{-}PR2\,mm'}^{\,c1}$ denotes the measured distance between projection laser markers m and m′, and $\bar{d}_{c1\text{-}PR2\,mm'}^{\,c1}$ the corresponding distance when the posture of Camera 2 is the same as that of Camera 1. θxc2 and θyc2 are given by the following equations:
$$ \theta_{xc2} = \arctan\!\left( \frac{d_{c1\text{-}PR2\,13}^{\,c1}}{\bar{d}_{c1\text{-}PR2\,13}^{\,c1}} \right) $$
$$ \theta_{yc2} = \arctan\!\left( \frac{d_{c1\text{-}PR2\,12}^{\,c1}}{\sqrt{\left( \bar{d}_{c1\text{-}PR2\,12}^{\,c1} \right)^{2} - \left( y_{PR2\,2}^{\,c1} - y_{PR2\,1}^{\,c1} \right)^{2}}} \right) $$
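The first two steps of this objective-view computation, recovering θzc2 and removing it from the marker coordinates, can be sketched as follows; the helper name and the data layout are illustrative assumptions, and the subsequent θxc2/θyc2 step is only indicated by a comment.

```python
# Minimal sketch: recover θzc2 from the line through markers m = 1 and m = 3,
# then rotate the marker coordinates by -θzc2 so the remaining spacing depends
# only on θxc2 and θyc2. Inputs are marker coordinates already expressed in
# Camera 1's frame; names are illustrative.
import numpy as np

def remove_theta_z(markers):
    """markers: dict m -> (x, y) of Camera 2's projection markers in Camera 1
    coordinates. Returns (theta_z, de-rotated markers)."""
    x1, y1 = markers[1]
    x3, y3 = markers[3]
    theta_z = np.arctan2(x1 - x3, y1 - y3)         # equation for θzc2 above
    c, s = np.cos(-theta_z), np.sin(-theta_z)
    rot = np.array([[c, -s], [s, c]])              # rotation by -θzc2
    derotated = {m: tuple(rot @ np.array(p)) for m, p in markers.items()}
    # θxc2 and θyc2 then follow from the marker distances (equations above),
    # compared with the reference distances for an untilted Camera 2.
    return theta_z, derotated
```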
The signs of the postures θxc2 and θyc2 are determined by checking whether they are opposite to or the same as the angle formed by the projection laser markers m = 1 and m = 2: an opposite sign gives a negative value, and the same sign gives a positive value. From the projection markers m = 1–3, which are irradiated parallel to the optical axis of Camera 2, an imaginary projection point m = 4 is introduced, corresponding to a beam parallel to the optical axis and located in the third quadrant of Camera 2. Because the four laser pointers m = 1–4 are installed symmetrically about the axes of the camera coordinate system, the intersection of the diagonals of the rectangle formed by the parallel-beam markers lies on the optical axis of Camera 2, and this intersection point in the plane does not depend on the change of posture. Therefore, the intersection point is obtained from the straight line through projection markers m = 1 and 4 and the straight line through projection markers m = 2 and 3. This intersection point is where the optical axis of Camera 2 crosses the target projection plate, so the positions xc2-c1 and yc2-c1 of Camera 2 are determined from the posture of Camera 2 and the normal of the plane, and zc2-c1 is then calculated. The desired position (xc2-c1, yc2-c1) is given by the following equation:
$$ \begin{pmatrix} x_{c2\text{-}c1} \\ y_{c2\text{-}c1} \end{pmatrix} = \begin{pmatrix} \dfrac{Intercept_{m=2,m=3} - Intercept_{m=1,m=4}}{inclination_{m=1,m=4} - inclination_{m=2,m=3}} - x_{A}^{\,c1} \\[10pt] inclination_{m=1,m=4} \cdot \dfrac{Intercept_{m=2,m=3} - Intercept_{m=1,m=4}}{inclination_{m=1,m=4} - inclination_{m=2,m=3}} + Intercept_{m=1,m=4} - y_{A}^{\,c1} \end{pmatrix} $$
where
$$ \begin{aligned} inclination_{m=1,m=4} &= \frac{y_{PR2\,1}^{\,c1} - y_{PR2\,4}^{\,c1}}{x_{PR2\,1}^{\,c1} - x_{PR2\,4}^{\,c1}}, & inclination_{m=2,m=3} &= \frac{y_{PR2\,3}^{\,c1} - y_{PR2\,2}^{\,c1}}{x_{PR2\,3}^{\,c1} - x_{PR2\,2}^{\,c1}}, \\ Intercept_{m=1,m=4} &= y_{PR2\,1}^{\,c1} - inclination_{m=1,m=4} \cdot x_{PR2\,1}^{\,c1}, & Intercept_{m=2,m=3} &= y_{PR2\,3}^{\,c1} - inclination_{m=2,m=3} \cdot x_{PR2\,3}^{\,c1} \end{aligned} $$
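The diagonal-intersection step can be sketched as follows, assuming the four parallel-beam markers of Camera 2 (with m = 4 the imaginary point) are already expressed in Camera 1 coordinates; the function name, the handling of the reference point (xA, yA) and the example numbers are illustrative.

```python
# Sketch of the diagonal-intersection step above: intersect the line through
# markers m = 1 and m = 4 with the line through markers m = 2 and m = 3 to
# find where Camera 2's optical axis crosses the projection plate.

def diagonal_intersection(pr1, pr2, pr3, pr4, xA=0.0, yA=0.0):
    """pr_m = (x, y) of projection marker m in Camera 1 coordinates."""
    incl_14 = (pr1[1] - pr4[1]) / (pr1[0] - pr4[0])      # slope of line m=1, m=4
    incl_23 = (pr3[1] - pr2[1]) / (pr3[0] - pr2[0])      # slope of line m=2, m=3
    icpt_14 = pr1[1] - incl_14 * pr1[0]                  # intercepts
    icpt_23 = pr3[1] - incl_23 * pr3[0]
    x_star = (icpt_23 - icpt_14) / (incl_14 - incl_23)   # intersection of the diagonals
    y_star = incl_14 * x_star + icpt_14
    return x_star - xA, y_star - yA                      # offset from the reference point A

# Example with a square of markers centred at (100, 50): the diagonals meet there.
xy = diagonal_intersection((150, 100), (50, 100), (150, 0), (50, 0))
```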
zc2-c1 is obtained in the same way as in the above equation. At this point, the posture of Camera 2 has been narrowed down to two candidate patterns. The projection marker coordinates are calculated by simulating both candidates, and the distance between markers m = 5 and m = 6 obtained from these simulated coordinates determines which candidate is correct. As the above equations show, the displacement of the laser projection points on the projection plane caused by a change in position or posture grows as the spacing between the laser beams becomes longer, so the accuracy of the measured position and posture can be improved by increasing the distance between the laser beams. This feature is not available in other methods such as RGB-d sensors.
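The remark about laser spacing can be checked with a small numerical example (ours, not the authors'): for two beams parallel to the optical axis separated by a baseline b, tilting the plane by θ about the axis perpendicular to the separation changes the relative depth of the two projection points by b·tanθ, so the measurable displacement grows with b.

```python
# Illustrative check of the baseline-sensitivity remark above (not from the
# paper): the depth difference between two parallel-beam projection points on
# a plane tilted by theta is baseline * tan(theta), i.e. linear in the spacing.
import numpy as np

def depth_difference(baseline_mm, theta_deg):
    return baseline_mm * np.tan(np.radians(theta_deg))

for b in (50, 100, 200):                      # candidate beam spacings in mm
    print(b, depth_difference(b, 5.0))        # ~4.4, 8.7, 17.5 mm at a 5-degree tilt
```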

3. Measurement System

Figure 4 shows the entire measurement system. The system consists of Camera 1, Camera 2 and the laser pointers. Each camera was set on grid paper; the grid paper and a laser marking device allowed the camera positions to be set precisely. Figure 5 shows the measurement unit of Camera 1, and Figure 6 shows the measurement unit of Camera 2. Each unit consists of a camera, laser pointers, mounts for the laser pointers, tilt stages for the laser pointers, a rotary stage for the unit and a gonio stage for the unit. Red (650 nm) and green (532 nm) laser pointers were used to distinguish the individual laser markers. The laser pointers were set on the tilt stages, which adjust the pointer angles so that each laser beam is parallel to the optical axis of the camera. One of Camera 2's laser pointers was deliberately set so that its beam was not parallel to the optical axis of Camera 2.

4. Relative Position and Posture Measurement by Subject View

4.1. Relative Position Measurement by Subject View

This experiment verified the effectiveness of measuring the position of the target projection plate from the camera image using the subject view. Figure 7 shows an image of the measurement by subject view. Three laser markers were projected onto the target projection plate, parallel to the optical axis of Camera 1. The relative position and posture between Camera 1 and the target projection plate were calculated from the positions of the laser markers on the plate. Camera 1 faced the target projection plate, and the distance between Camera 1 and the plate was changed from 1000 mm to 7000 mm in 1000 mm steps along the zc1 axis.
Figure 8 shows the position measurement results by subject view. The maximum position error was 2.8%. The system could measure position with an error of less than 50 mm at shooting distances from 1000 to 4000 mm. At shooting distances from 5000 to 7000 mm, however, the position error exceeded 50 mm because the image resolution per pixel decreases as the measurement distance increases.

4.2. Relative Posture Measurement by Subject View

This experiment verified the effectiveness of measuring the posture of the target projection plate from the camera image using the subject view. Figure 9 shows an image of the measurement by subject view. Rotation about the xc1 axis was applied by a goniometer stage, and rotation about the yc1 axis by a rotary stage. The posture was measured from the camera image when only the xc1 axis was changed, when only the yc1 axis was changed and when both axes were changed. The distance between Camera 1 and the target projection plate was 300 mm along the zc1 axis, and each axis was changed from −20 degrees to +20 degrees in 5-degree steps.
Figure 10 shows the posture measurement results by subject view. The maximum error was 0.5 degrees when θxc1 changed, as shown in Figure 10a, and posture could be measured with sufficient accuracy. The maximum error was 6.6 degrees when θyc1 changed, as shown in Figure 10b, which was sufficiently accurate at distances of less than 300 mm. When θxc1 and θyc1 both changed, the maximum error around the xc1 axis was 2.6 degrees, and posture was measured with sufficient accuracy; however, the maximum error around the yc1 axis was 11.7 degrees, which was caused by an installation error of the target projection plate and by the laser markers not being completely parallel.

5. Relative Position and Posture Measurement by Objective View

5.1. Relative Position Measurement by Objective View

This experiment verified the effectiveness of measuring the position of Camera 2 from Camera 1's image using the objective view. The relative position and posture between Camera 1 and Camera 2 were obtained by measuring the laser markers of Camera 2 projected on the target projection plate. Figure 11 shows an image of the measurement by objective view. Camera 2 had five laser pointers: three were parallel to the optical axis of Camera 2, and the other two were not parallel to the optical axis and had different orientations. Camera 1 had three laser pointers parallel to its optical axis, as described in Section 4. Camera 1 and Camera 2 faced the target projection plate. The distance between Camera 1 and the target projection plate was 1800 mm along the zc1 axis. The position of Camera 2, expressed in the coordinates of Camera 1, was set to (−300, −35, 300), (−300, −35, 600), (−200, −35, 300), (−200, −35, 600), (200, −35, 300), (200, −35, 600), (300, −35, 300) and (300, −35, 600).
Figure 12 shows the position measurement results by objective view. The maximum position error of x was 2.5 mm, that of y was 19 mm and that of z was 32 mm. In this experiment, when the measurement distance is within 1800 mm, the measurement error relative to the measurement distance is less than 1.8%. The position errors of y and z were larger than that of x, which was caused by errors in setting the postures of the laser pointers.

5.2. Relative Posture Measurements by Objective View

This experiment verified the effectiveness of measuring the posture of Camera 2 from Camera 1's image using the objective view. Figure 13 shows an image of the measurement by objective view. For Camera 2, rotations about the xc2 and zc2 axes were applied by a two-axis goniometer stage, and rotation about the yc2 axis by a rotary stage. The posture of Camera 2's projection laser markers was measured from Camera 1's image when only the xc2 axis was changed, when only the yc2 axis was changed, when only the zc2 axis was changed, when two axes were changed and when all three axes were changed. The distance between Camera 1 and the target projection plate was 1900 mm along the zc1 axis. The position of Camera 2, expressed in the coordinates of Camera 1, was set to (−400, −35, 1200), (−400, −35, 1400), (−200, −35, −100) and (−400, −35, 1500). Each axis of Camera 2 was changed from 0 degrees to 60 degrees in 5-degree steps.
Figure 14 shows the posture measurement results by objective view. From Figure 14a, the maximum error of θxc2 was 12.5 degrees, the maximum error of θyc2 was 4 degrees and the maximum error of θzc2 was 1.7 degrees when only one axis was changed. In this experiment, when the measurement distance is within 1900 mm, the measurement errors of θyc2 and θzc2 are less than 6.7%. Figure 14b shows the relationship between the posture of Camera 2 and the distance between the laser markers: the distance between the laser markers increased exponentially with the change of posture. Because the spacing of Camera 2's laser markers was not long enough, the error in θxc2 became large. Figure 14c shows the measured posture when two axes changed, and Figure 14d when three axes changed. The maximum error of θxc2 was 3.9 degrees and the maximum error of θyc2 was 3.8 degrees when θxc2 and θyc2 changed. The maximum error of θxc2 was 3.8 degrees, the maximum error of θyc2 was 5.1 degrees and the maximum error of θzc2 was 2.4 degrees when θxc2, θyc2 and θzc2 changed. These were sufficiently accurate.

6. Conclusions

In this paper, we developed a system to sense the position and posture of a mobile robot. It uses laser projection markers and cameras and has a very simple configuration. The camera photographs a target structure and the projected laser markers, and the position and posture of the camera and mobile robot are calculated by image processing. We examined the number of laser markers required and verified the method by experiment. The results are summarized below.
(1)
In the position measurement by subject view, the maximum position error was 2.8%. The system could measure position with an error of less than 50 mm at shooting distances from 1000 mm to 4000 mm.
(2)
In the posture measurement by subject view, the maximum error was 0.5 degrees when θxc1 changed, 6.6 degrees when θyc1 changed, and 2.6 degrees around the xc1 axis when θxc1 and θyc1 both changed. These were sufficiently accurate at distances of less than 300 mm.
(3)
In the position measurement by objective view, the maximum position error of x was 2.5 mm, that of y was 19 mm and that of z was 32 mm. These were sufficiently accurate at distances of less than 1800 mm.
(4)
In the posture measurement by objective view, the maximum error of θxc2 was 12.5 degrees, the maximum error of θyc2 was 4 degrees and the maximum error of θzc2 was 1.7 degrees when only one axis was changed. θyc2 and θzc2 were measured with sufficient accuracy at distances of less than 1900 mm. The maximum error of θxc2 was 3.9 degrees and the maximum error of θyc2 was 3.8 degrees when θxc2 and θyc2 changed. The maximum error of θxc2 was 3.8 degrees, the maximum error of θyc2 was 5.1 degrees and the maximum error of θzc2 was 2.4 degrees when θxc2, θyc2 and θzc2 changed. These were sufficiently accurate.
(5)
A relationship was found between the posture of Camera 2 and the distance between its laser markers: the distance between the laser markers increased exponentially as the posture changed. Because the spacing of Camera 2's laser markers was not long enough, the error in θxc2 became large.

Author Contributions

T.S. (Tohru Sasaki) and R.S. were responsible for conceptualization and methodology, T.S. (Toshitaka Sakai) carried out all the software development, S.K. and T.N. supervised the experimental site, K.T. and M.J. supervised, reviewed and edited. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by corporate funding from R&D of Sato Tekko Co., Ltd. (3190 Nakaniikawa-gun, Tateyama-machi, Toyama 930-0293, Japan).

Acknowledgments

We would like to express our gratitude to the engineers of Sato Tekko Co., Ltd.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Matsumura, E. Present aspects on inspection and diagnosis of concrete structures. Concr. J. 2001, 39, 8–15. [Google Scholar] [CrossRef]
  2. Fujii, M. Present state and prospect on management of concrete structures. Concr. J. 1990, 28, 12–21. [Google Scholar] [CrossRef]
  3. Fujita, Y.; Nakamura, H.; Hamamoto, Y. Automatic and exact crack extraction from concrete surfaces using image processing techniques. J. Jpn. Soc. Civ. Eng. 2010, 66, 459–470. [Google Scholar] [CrossRef]
  4. Tomiyama, S.; Ohira, N.; Takahashi, S.; Nakamura, H. Image denoising on DoG image-based crack detection method. Inst. Electron. Inf. Commun. Eng. 2011, 110, 207–212. [Google Scholar]
  5. Dorafshan, S.; Maguire, M. Bridge inspection: Human performance, unmanned aerial systems and automation. J. Civ. Struct. Health Monit. 2018, 8, 443–476. [Google Scholar] [CrossRef] [Green Version]
  6. Pereira, F.C.; Pereira, C.E. Embedded image processing systems for automatic recognition of cracks using UAVs. IFAC-PapersOnLine 2015, 48, 16–21. [Google Scholar] [CrossRef]
  7. Ito, A.; Aoki, Y.; Hashimoto, S. Accurate extraction and measurement of fine cracks from concrete block surface image. In Proceedings of the IEEE 28th Annual Conference of the Industrial Electronics Society, Sevilla, Spain, 5–8 November 2002. [Google Scholar]
  8. Isobe, Y.; Masuyama, G.; Umeda, K. Target tracking for a mobile robot with a stereo camera considering illumination changes. In Proceedings of the IEEE/SICE International Symposium on System Integration, Nagoya, Japan, 11–13 December 2015; pp. 702–707. [Google Scholar]
  9. Kawanishi, R.; Yamashita, A.; Kaneko, T.; Asama, H. Parallel line-based structure from motion by using omnidirectional camera in textureless scene. Adv. Robot. 2013, 27, 19–32. [Google Scholar] [CrossRef]
  10. Singh, T.R.; Roy, S.; Singh, O.I.; Sinam, T.; Singh, K.M. A new local adaptive thresholding technique in binarization. Int. J. Comput. Sci. Issues 2011, 8, 271–277. [Google Scholar]
  11. Ragulskis, M.; Aleksa, A. Image hiding based on time-averaging moiré. Opt. Commun. 2009, 282, 2752–2759. [Google Scholar] [CrossRef]
  12. Oliveira, H.; Correia, P.L. Automatic road crack segmentation using entropy and image dynamic thresholding. In Proceedings of the European Signal Processing Conference, Glasgow, UK, 24–28 August 2009; pp. 622–626. [Google Scholar]
  13. Ubukata, T.; Terabayashi, K.; Mora, A.; Kawashita, T.; Masuya, G.; Umeda, K. Fast human detection combining range image segmentation and local feature based detection. In Proceedings of the International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 4281–4286. [Google Scholar]
  14. Zhou, F.; Peng, B.; Cui, Y.; Wang, Y.; Tan, H. A novel laser vision sensor for omnidirectional 3D measurement. Opt. Laser Technol. 2013, 45, 1–12. [Google Scholar] [CrossRef]
  15. Yagi, Y.; Nagai, H.; Yamazawa, K.; Yachida, M. Reactive visual navigation based on omnidirectional sensing—Path following and collision avoidance. J. Intell. Robot. Syst. 2001, 31, 379–395. [Google Scholar] [CrossRef]
  16. Onoe, Y.; Yamazawa, K.; Takemura, H.; Yokoya, N. Telepresence by real-time view-dependent image generation from omnidirectional video streams. Comput. Vis. Image Underst. 1998, 71, 154–165. [Google Scholar] [CrossRef] [Green Version]
  17. Do, H.N.; Jadaliha, M.; Choi, J.; Lim, C.Y. Feature selection for position estimation using an omnidirectional camera. Image Vis. Comput. 2015, 39, 1–9. [Google Scholar] [CrossRef] [Green Version]
  18. Urban, S.; Leitloff, J.; Hinz, S. Improved wide-angle, fisheye and omnidirectional camera calibration. ISPRS J. Photogramm. Remote Sens. 2015, 108, 72–79. [Google Scholar] [CrossRef]
  19. Yamaguchi, T.; Nakamura, S.; Saegusa, R.; Hashimoto, S. Image-based crack detection for real concrete surfaces. IEEJ Trans. Electr. Electron. Eng. 2008, 3, 128–135. [Google Scholar] [CrossRef]
  20. Hashmi, M.F.; Keskar, A.G. Computer-vision based visual inspection and crack detection of railroad tracks. Available online: http://www.inase.org/library/2014/venice/bypaper/OLA/OLA-16.pdf (accessed on 7 April 2020).
  21. Zidek, K.; Hosovsky, A. Image thresholding and contour detection with dynamic background selection for inspection tasks in machine vision. Int. J. Circuits. Syst. Signal Process. 2014, 8, 545–554. [Google Scholar]
  22. Prasanna, P.; Dana, K.; Gucunski, N.; Basily, B. Computer vision based crack detection and analysis. Proc. SPIE 2012, 8345, 834542. [Google Scholar]
  23. Cho, S.H.; Hisatomi, K.; Hashimoto, S. Cracks and displacement feature extraction of the concrete block surface. In IAPR Workshop on Machine Vision Applications; Fujitsu Makuhari System Laboratory: Tokyo, Japan, 1998; pp. 246–249. [Google Scholar]
  24. Adhikari, R.S.; Moselhi, O.; Bagchi, A. Image-based retrieval of concrete crack properties for bridge inspection. Autom. Constr. 2014, 39, 180–194. [Google Scholar] [CrossRef]
  25. Zalama, E.; Bermejo, J.G.G.; Medina, R.; Llamas, J. Road crack detection using visual features extracted by gabor filters. Comput-Aided Civ. Infrastruct. Eng. 2014, 29, 342–358. [Google Scholar] [CrossRef]
  26. Alam, S.Y.; Loukili, A.; Grondin, F.; Rozière, E. Use of the digital image correlation and acoustic emission technique to study the effect of structural size on cracking of reinforced concrete. Eng. Fract. Mech. 2015, 143, 17–31. [Google Scholar] [CrossRef]
  27. Iyer, S.; Sinha, S.K. A robust approach for automatic detection and segmentation of cracks in underground pipeline images. Image Vis. Comput. 2005, 23, 921–933. [Google Scholar] [CrossRef]
  28. Su, T.C. Application of computer vision to crack detection of concrete structure. Int. J. Eng. Technol. 2013, 5, 457–461. [Google Scholar] [CrossRef] [Green Version]
  29. Zou, Q.; Cao, Y.; Li, Q.; Mao, Q.; Wang, S. Crack tree: Automatic crack detection from pavement images. Pattern Recognit. Lett. 2012, 33, 227–238. [Google Scholar] [CrossRef]
  30. Mozos, O.M.; Kurazume, R.; Hasegawa, T. Categorization of indoor places using the Kinect sensor. Sensors 2012, 12, 6695–6711. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Hafnera, F.M.; Bhuiyanb, A.; Kooij, J.F.P.; Granger, E. RGB-depth cross-modal person re-identification. In Proceedings of the 16th IEEE International Conference on Advanced Video and Signal Based Surveillance 2019, Taipei, Taiwan, 18–21 September 2019; p. 8909838. [Google Scholar]
  32. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-d mapping: Using depth cameras for dense 3D modeling of indoor environments. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 477–491. [Google Scholar]
  33. Ren, J.; Gong, X.; Yu, L.; Zhou, W.; Yang, M.Y. Exploiting global priors for RGB-d saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 25–32. [Google Scholar]
  34. Sasaki, T.; Sakai, Y.; Obara, H. Local position measurement of moving robots system using stereo camera method. J. Jpn. Soc. Precis. Eng. 2011, 77, 490–494. [Google Scholar] [CrossRef]
  35. Sasaki, T.; Ushimaru, T.; Yamatani, T.; Ikemoto, Y.; Obara, H. Pivot turning measurement of relative position and posture for mobile robot system using stereo camera. Key Eng. Mater. 2012, 523–524, 895–900. [Google Scholar] [CrossRef]
  36. Sasaki, T.; Ushimaru, T.; Yamatani, T.; Ikemoto, Y. Local position measurement of moving robots system using stereo measurement on the uneven floor. J. Jpn. Soc. Precis. Eng. 2013, 79, 937–942. [Google Scholar] [CrossRef] [Green Version]
  37. Hatakeyama, N.; Sasaki, T.; Terabayashi, K.; Funato, M.; Jindai, M. Position and posture measurement method of the omnidirectional camera using identification markers. J. Robot. Mechatron. 2018, 30, 354–362. [Google Scholar] [CrossRef]
Figure 1. Concept image of the proposed system.
Figure 2. A model of the measurement system.
Figure 3. Geometric model of the subject view.
Figure 4. The entire measurement system.
Figure 5. (a) Front view; (b) side view of the measurement unit of Camera 1.
Figure 6. (a) Front view; (b) side view of the measurement unit of Camera 2. Camera unit: Imaginsource/DFK23UP031, CMOS color, 2592 × 1944 pixels, 15 fps. Lens: VS-Technology/SV-H Series SV-7525H, f = 12 mm, 31.2 × 40.8 deg. (2/3″).
Figure 7. Experiment image of position measurement by subject view.
Figure 8. Position measurement result by subject view.
Figure 9. Image of posture measurement by subject view.
Figure 10. (a) θxc1 changed; (b) θyc1 changed; posture measurement results by subject view.
Figure 11. Experiment image of position measurement by objective view.
Figure 12. Position measurement results by objective view.
Figure 13. Image of posture measurement by objective view.
Figure 14. (a) θzc2 changed; (b) relation between posture and distance of laser markers; (c) θxc2 and θyc2 changed; (d) three axes (θxc2, θyc2 and θzc2) changed; measurement results by objective view.
