Study on Image Correction and Optimization of Mounting Positions of Dual Cameras for Vehicle Test

Abstract: Among surrounding information-gathering devices, cameras are the most accessible and widely used in autonomous vehicles. In particular, stereo cameras are employed in academic as well as practical applications. In this study, commonly used webcams were mounted on a vehicle in a dual-camera configuration and used to perform lane detection based on image correction. The height, baseline, and angle were considered as variables for optimizing the mounting positions of the cameras. Then, a theoretical equation was proposed for the measurement of the distance to the object, and it was validated via vehicle tests. The optimal height, baseline, and angle of the mounting position of the dual-camera configuration were identified to be 40 cm, 30 cm, and 12°, respectively. These values were utilized to compare the performances of vehicles in stationary and driving states on straight and curved roads, as obtained by vehicle tests and theoretical calculations. The comparison revealed the maximum error rates in the stationary and driving states on a straight road to be 3.54% and 5.35%, respectively, and those on a curved road to be 9.13% and 9.40%, respectively. It was determined that the proposed method is reliable because the error rates were less than 10%.


Introduction
The levels of driving automation defined by the Society of Automotive Engineers (SAE) are the internationally accepted standard. SAE provides a taxonomy with detailed definitions for six levels of driving automation, ranging from no driving automation (Level 0) to full driving automation (Level 5) [1]. Mass-produced vehicles have recently begun to be equipped with Level 2 autonomous driving technology. This technology provides drivers with partial driving automation and is called an advanced driver assistance system (ADAS). Among examples of ADAS, adaptive cruise control (ACC) and the lane-keeping assist system (LKAS) are Level 1 technologies, and highway driving assist (HDA) is a Level 2 technology.
The primary goal of autonomous driving technology is to respond proactively to unanticipated scenarios, such as traffic accidents and construction sites. This requires rapid and effective identification of the environment surrounding the vehicle. To achieve this, various sensors, such as light detection and ranging (LiDAR) and radar sensors, and cameras are used for detection [2]. Among these, cameras capture images containing a large quantity of information, which enables tasks such as object detection, traffic information collection, and lane detection. Furthermore, cameras are more readily accessible than other sensors. Therefore, several studies have been conducted on the camera-based collection of environmental information and its processing.
With regard to the correction of camera images, Lee et al. [3] proposed a method to correct the radial distortion caused by camera lenses. In addition, Detchev et al. [4] proposed a method for simultaneously estimating the internal and relative direction parameters for calibrating measurement systems comprising multiple cameras.

1. Introduction of the lane detection algorithm.
2. Input image distortion correction, stereo rectification, and focal length correction.
3. Experimental evaluation of algorithm precision according to three variables for optimal dual-camera positioning.
4. Proposal of equations for calculating the distance between the vehicle and objects in front of it on straight and curved roads for test evaluation.
5. Applicability evaluation through real vehicle tests using the optimal camera position determined in step 3 and the distance calculation equation proposed in step 4.

The ROI in a camera-captured image is the region containing the information relevant to the task at hand. As the range of the scenery captured by a camera affixed within a vehicle remains constant, the ROI in each image is obtained by removing the corresponding irrelevant regions.

Theoretical Background for Dual Camera-Based Image Correction and the Proposed Lane Detection Algorithm
Cameras usually capture images in the red, green, and blue (RGB) format, which comprises three channels. Grayscale conversion of such images produces monochromatic images, which comprise a single channel. As images converted via this method retain only brightness information, the amount of data to be processed is reduced by two-thirds, increasing the computational speed.
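As a concrete illustration, the channel-averaging conversion described above can be sketched in a few lines of NumPy (the function name is ours, not from the paper):

```python
import numpy as np

def to_grayscale(rgb):
    """Average the R, G, and B channels into a single brightness channel."""
    return rgb.mean(axis=2)

# A 1 x 2 RGB image: one white pixel and one pure-red pixel.
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.float64)
gray = to_grayscale(img)    # shape (1, 2): one channel instead of three
```

Because a three-channel image collapses to one channel, the data volume drops by two-thirds, as stated above.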
The Canny edge detector is an edge detection algorithm that applies successive steps: noise reduction, computation of the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding [27]. Owing to its multi-step mechanism, it performs better than methods that use a single differential operator (e.g., the Sobel mask).
The Hough transform is a method for transforming components in the Cartesian coordinate system to those in the parameter space [28]. Straight lines and points on the Cartesian coordinate system are represented by points and straight lines, respectively, in the parameter space. Thus, points of intersection between straight lines in the parameter space can be used to search for straight lines passing through a given set of points in the Cartesian coordinate system.
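The point-line duality can be illustrated in the slope-intercept parameterization (a toy sketch; practical Hough implementations use the (ρ, θ) parameterization instead, to handle vertical lines):

```python
def parameter_space_intersection(p1, p2):
    """Each Cartesian point (x, y) maps to the line c = y - m*x in (m, c)
    parameter space; intersecting the two parameter-space lines yields the
    slope and intercept of the unique Cartesian line through both points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y1 - y2) / (x1 - x2)    # slope of the Cartesian line
    c = y1 - m * x1              # intercept
    return m, c

m, c = parameter_space_intersection((1, 3), (4, 9))   # both lie on y = 2x + 1
```

A full Hough transform accumulates such parameter-space lines for every edge pixel and searches for cells where many of them intersect.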
The hue, saturation, value (HSV) format is a color model that represents an image in terms of hue, saturation, and value. It is particularly effective for the facile expression of desired colors because its operational template agrees with the human mode of color recognition. Perspective transform facilitates the modeling of homography using a 3 × 3 transformation matrix. The perspective of any image can be removed via a geometric processing method by relocating the pixels of the image.
The sliding window method uses a sub-level array of a certain size called a window and reduces the computational load for calculating the elements in each window in the entire array by reusing (rather than discarding) redundant elements.
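The element-reuse idea can be sketched for one-dimensional window sums (the function name is ours):

```python
def sliding_window_sums(values, k):
    """Sum of every length-k window; each step reuses the previous sum,
    adding the entering element and dropping the leaving one, instead of
    re-adding all k elements."""
    s = sum(values[:k])
    sums = [s]
    for i in range(k, len(values)):
        s += values[i] - values[i - k]
        sums.append(s)
    return sums

sums = sliding_window_sums([2, 1, 3, 5, 0, 4], 3)
```

Each new window costs one addition and one subtraction rather than k additions.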
The curve fitting method involves fitting a function to a given curve representing the input data. A polynomial function is most commonly used for this purpose. Furthermore, the input data can be approximated using a quadratic function by employing the least-squares approximation method.

Lane Detection Algorithm
The single-channel image was generated by averaging the pixel values corresponding to the R, G, and B channels. Figure 2d depicts the result of edge detection. A Gaussian filter was used to remove the noise, and the Canny edge detector was used to generate this edge-detected image. Then, straight lines corresponding to the lane marks were obtained, as depicted in Figure 2e. The Hough transform was used to detect the edge components in the edge-detected image. Subsequently, straight lines corresponding to gradients with magnitudes of at most 5° were removed, resulting in the elimination of horizontal and vertical lines that were unlikely to correspond to the lane.

The yellow pixels were extracted from Figure 2b, and the result is depicted in Figure 2f. Following the conversion of the image from the RGB format to the HSV format, a yellow color range was selected. With the hue, saturation, and value channels normalized to the interval 0-1, the yellow pixels corresponded to values of 0-0.1, 0.9-1, and 0.12-1 for the hue, saturation, and value channels, respectively; one-third of the mean brightness of the image was used for the value channel. Pixels within the set ranges were assigned a value of 255; all others were set to zero. Figure 2g depicts a combination of the image obtained by extracting straight lines to identify pixels corresponding to the lane candidates and that obtained by extracting the color. The combination was obtained by assigning weights of 0.8 and 0.2 to the images in Figure 2c,d, respectively.
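A per-pixel version of this HSV thresholding can be sketched with the standard-library colorsys module (here the fixed 0.12 lower bound on the value channel stands in for the adaptive one-third-mean-brightness bound used in the study):

```python
import colorsys

# Normalized threshold ranges quoted in the text.
H_RANGE, S_RANGE, V_RANGE = (0.0, 0.1), (0.9, 1.0), (0.12, 1.0)

def is_lane_yellow(r, g, b):
    """Return 255 for an 8-bit RGB pixel inside the HSV window, else 0."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    inside = (H_RANGE[0] <= h <= H_RANGE[1]
              and S_RANGE[0] <= s <= S_RANGE[1]
              and V_RANGE[0] <= v <= V_RANGE[1])
    return 255 if inside else 0
```

A real pipeline would apply the same comparison vectorized over a NumPy array rather than pixel by pixel.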
Further, the lane candidates were obtained using the sliding window method after removing the perspective from the image presented in Figure 2g, and the output is depicted in Figure 2h. The image was captured in advance such that the optical axis of the camera was parallel to the road when the vehicle was located in the center of the straight road. The image can then be warped so that the left and right lanes on a straight road are parallel. The coordinates (765, 246), (1240, 246), (1910, 426), and (5, 516) of the four points on the set of lanes visible within the ROI were relocated to the points (300, 648), (300, 0), (780, 0), and (780, 648) in the warped image to align them along straight lines. A window was selected whose width and height were one-twentieth (54 pixels) and one-sixth, respectively, of those of the warped image. The window with the largest pixel sum was then identified via the sliding window method.
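Relocating the four ROI corner points amounts to solving for a 3 × 3 homography; a minimal NumPy sketch using the coordinates quoted above (helper names are ours; OpenCV's getPerspectiveTransform performs the same four-point solve):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: solve for the 3x3 matrix H that maps each
    src point onto its dst point, with the bottom-right entry fixed to 1."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs.extend([u, v])
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to (x, y) in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# ROI corners from the text, relocated so the lane marks become parallel.
src = [(765, 246), (1240, 246), (1910, 426), (5, 516)]
dst = [(300, 648), (300, 0), (780, 0), (780, 648)]
H = homography(src, dst)
```

Warping the whole image then just applies H (or its inverse, when sampling) to every pixel coordinate.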
Subsequently, a lane curve was generated by fitting a quadratic curve to the pixels of the lane candidate, as depicted in Figure 2i. The quadratic curve fitting is performed using the least-squares method, and the positions of the pixels of the lane candidate are indicated by the six windows on the left and right lane marks in Figure 2h.
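The quadratic least-squares fit can be sketched with NumPy's polyfit on synthetic lane-candidate pixels (the coefficients below are made up for the demonstration):

```python
import numpy as np

# Hypothetical lane-candidate pixel coordinates lying exactly on
# x = 0.5*y**2 - 2*y + 3 (made-up coefficients).
ys = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
xs = 0.5 * ys**2 - 2.0 * ys + 3.0

# Least-squares fit of a quadratic, as used for the lane curve.
a, b, c = np.polyfit(ys, xs, 2)
```

Fitting x as a function of y (rather than the reverse) keeps near-vertical lane curves single-valued in the warped image.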
Finally, lane detection based on the input image was completed. The final result, obtained by applying the lane curve to the input image via perspective transform, is depicted in Figure 2j.

Image Distortion Correction
Images captured by cameras exhibit radial distortions due to the refractive indices of convex lenses and tangential distortions due to the horizontal leveling problem inherent to the manufacturing process of lenses and image sensors. Circular distortions induced by radial distortion at the edge of the image and elliptical distortions induced by the tangential distortion require correction. The values of pixels in the distorted image can be used as the values of the corresponding pixels in the corrected image by distorting the coordinates of each pixel in the image [29].
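The radial part of this distortion is commonly modeled as a polynomial in the squared radius (the Brown model); a forward-model sketch with made-up coefficients, not the paper's calibration values:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Forward radial-distortion model x_d = x * (1 + k1*r^2 + k2*r^4) for
    normalized coordinates centered on the principal point. Correction
    inverts this mapping, usually numerically. k1 and k2 are illustrative."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

xd, yd = apply_radial_distortion(0.5, 0.0, k1=0.1)
```

Calibration estimates k1, k2 (and tangential terms) from known patterns such as the checkerboard described below; undistortion then resamples the image through the inverted mapping.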
In this study, OpenCV's built-in functions for checkerboard pattern identification, corner point identification, and camera calibration were adopted for image processing. To correct the input image, a 6 × 4 checkerboard image was captured using the camera, its corner points were identified, and the camera matrix and distortion coefficients were calculated based on the points obtained. Figure 3a depicts the identification of the corner points in the original image, and Figure 3b depicts the screen after the removal of distortion.

Image Rectification
Parallel stereo camera configuration is the method that involves utilizing two cameras whose optical axes are parallel. It is particularly suitable for image processing because of the absence of vertical disparity [30]. In contrast, actual photographs require image rectification to correct the vertical disparity originating from the installation or internal parameters of cameras. This method corrects for an arbitrary object in the left and right images obtained with dual cameras to obtain equal coordinates for the height of images.
In this study, OpenCV's built-in stereo calibration and stereo rectification functions were adopted for image processing. In addition, the checkerboard image utilized during the removal of the image distortion was used to identify the checkerboard pattern and its corner points and calibrate the dual cameras ( Figure 4a). As depicted in Figure 4b, the internal parameters, rotation matrix of the dual-camera configuration, and projection matrix on the rectified coordinate system can be obtained based on a pair of checkerboard images captured using the dual cameras.

Image Rectification
Parallel stereo camera configuration is the method that involves utilizing two cameras whose optical axes are parallel. It is particularly suitable for image processing because of the absence of vertical disparity [30]. In contrast, actual photographs require image rectification to correct the vertical disparity originating from the installation or internal parameters of cameras. This method corrects for an arbitrary object in the left and right images obtained with dual cameras to obtain equal coordinates for the height of images.
In this study, OpenCV's built-in stereo calibration and stereo rectification functions were adopted for image processing. In addition, the checkerboard image utilized during the removal of the image distortion was used to identify the checkerboard pattern and its corner points and calibrate the dual cameras ( Figure 4a). As depicted in Figure 4b, the internal parameters, rotation matrix of the dual-camera configuration, and projection matrix on the rectified coordinate system can be obtained based on a pair of checkerboard images captured using the dual cameras.

Focal Length Correction
Dual cameras were installed collinearly such that the optical axes of the two cameras are parallel. Furthermore, the lenses were positioned at identical heights above the ground. The 3D coordinates of any object were calculated relative to the camera positions, based on the geometry and triangulation of the cameras depicted in Figure 5, as follows:

X = b(x_l + x_r) / (2(x_l - x_r)), Y = by / (x_l - x_r), Z = bf / (x_l - x_r)    (1)

where: X, Y, Z - the coordinates of the object in the local coordinate system with its origin at the center of the dual cameras; x_l, x_r - the image x-coordinates of the object in the left and right images; y - its common image y-coordinate after rectification; f - focal length; b - baseline.
The focal length is an essential parameter in the calculation of the Z-coordinate. However, a problem with the use of inexpensive webcams is that some manufacturers do not provide details such as the focal length. Furthermore, errors originating from image correction necessitate an accurate estimation of the focal length.
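Assuming the standard parallel-stereo triangulation relations (our reconstruction; the paper's Figure 5 defines the actual geometry), the coordinate computation can be sketched as:

```python
def triangulate(xl, xr, yl, f, b):
    """Parallel-stereo triangulation: the disparity d = xl - xr gives the
    depth Z = f*b/d; X and Y follow from similar triangles. Image
    coordinates are measured from each camera's principal point, and the
    (X, Y, Z) origin lies midway between the two cameras."""
    d = xl - xr                      # disparity, px
    Z = f * b / d
    X = b * (xl + xr) / (2.0 * d)
    Y = b * yl / d                   # yl == yr after rectification
    return X, Y, Z

# Illustrative values only: f and disparities in pixels, b in meters.
X, Y, Z = triangulate(xl=110.0, xr=90.0, yl=40.0, f=700.0, b=0.3)
```

Note how Z is inversely proportional to the disparity, which is what the focal length correction below exploits.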
This can be achieved by employing curve fitting based on actual data. Based on the relationship between distance and disparity, Z_a is calculated from the equation:

Z_a = αb / (x_l - x_r) + β    (2)

where: X, Y, Z - the coordinates of the object in the universal Cartesian coordinate system; Z_a - the distance to the object; α, β - the coefficients obtained via the focal length correction.
During testing, images of objects installed at intervals of 0.5 m over the range of 1-5 m were captured. In addition, the differences between the X-coordinates of each object captured by the two cameras were recorded. Then, α and β were evaluated by fitting a curve to the recorded differences via the least-squares method.
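The α and β fit can be sketched as a linear least-squares solve on synthetic disparity-distance pairs (the ground-truth values are made up, and the model form Z_a = α·b/d + β is our reading of Equation (2)):

```python
import numpy as np

b = 0.3                                   # baseline, m (value used in the study)
true_alpha, true_beta = 700.0, 0.05       # made-up ground truth for the demo
Z = np.arange(1.0, 5.5, 0.5)              # target distances: 1-5 m at 0.5 m steps
d = true_alpha * b / (Z - true_beta)      # synthetic disparities, px

# Least-squares fit of Z_a = alpha * (b / d) + beta
A = np.column_stack([b / d, np.ones_like(d)])
(alpha, beta), *_ = np.linalg.lstsq(A, Z, rcond=None)
```

Because Z is linear in 1/d, the fit needs no iterative optimizer; a single linear solve recovers both coefficients.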

Mounting Heights of Cameras
In the proposed equation, the distance is measured relative to the ground. The mounting height of the cameras is inversely proportional to the fraction of the ground captured in the image; therefore, it strongly influences the region captured in the image.
Heights of 30 cm, 40 cm, and 50 cm were considered. In the case of regular passenger cars, 30 cm was selected as the minimum value because their bumpers are at least 30 cm above the ground level. The maximum value was set to 50 cm because larger heights made it difficult to capture the ground within 1 m. Figure 6 depicts the input images corresponding to heights of 30 cm, 40 cm, and 50 cm, where the baseline of the cameras is 30 cm and the angle of inclination of the mounted cameras is 12°.

Baseline of Cameras
Equation (1) is based on the geometry and triangulation of the cameras. Therefore, the baseline between the cameras significantly affects the measurement of distance.
Baselines of 10 cm, 20 cm, and 30 cm were considered in this study. First, 10 cm was selected as the minimum value because it was the smallest feasible baseline. The baseline was then increased in 10 cm increments to examine its influence. Figure 7 depicts the input images corresponding to baselines of 10 cm, 20 cm, and 30 cm, where the mounting height of the cameras is 40 cm and the angle of inclination of the mounted cameras is 12°.

Angle of Inclination of Mounted Cameras
The installation of cameras parallel to the ground reduces the vertical range and hinders close-range supervision of the ground. Therefore, it is essential to utilize an optimal angle of inclination during the installation of cameras.
Angles of 3°, 7°, and 12° were considered as feasible angles of inclination. First, 3° was selected as the minimum value owing to the difficulty of capturing the ground within a radius of 1 m at smaller angles of inclination from a height of 50 cm. The proportion of the road captured in the image increased as the angle was increased. However, vehicle vibration or the presence of ramps could prevent the upper part of the road from being included in the images. Further, 12° was selected as the maximum angle of inclination, as it yielded images in which the road accounted for 20-80% of the vertical range.

Test Results for Optimization of Mounting Positions
During testing, dual cameras were mounted on the vehicle, and objects were installed at distances ranging between 1 m and 5 m at intervals of 0.5 m on an actual road. Then, the differences between the X-coordinates captured by the left and right cameras were computed and substituted into Equation (2). Figures 9-11 depict the test results corresponding to heights of 30 cm, 40 cm, and 50 cm, respectively; in each figure, panels (a-c) illustrate the variation in the degree of precision with respect to varying angles, corresponding to baselines of 10 cm, 20 cm, and 30 cm, respectively.
Table 1 summarizes the results of Figures 9-11. It is evident from Figures 9-11 that the error rate exhibited a decreasing tendency as the angle was increased. Meanwhile, it tended to decrease as the baseline was increased. Finally, the error rate decreased when the height was increased from 30 cm to 40 cm, and it increased when the height was increased to 50 cm. Based on the aforementioned data, the best result was obtained corresponding to a height of 40 cm, a baseline of 30 cm, and an angle of inclination of 12°.
In the next section, these values are used to validate the theoretical equation of distance measurement.

Measurement of the Distance to an Object in Front of the Vehicle on a Straight Road
The Z-coordinate of an object in front of the vehicle was obtained by using the coefficient α in Equation (2) (as evaluated via the focal length correction) as the focal length and substituting it into Equation (1). However, during testing, the cameras were installed at an angle of inclination θ to capture the close-range ground. That is, the optical axes of the cameras and the ground were not parallel during testing.
The Z-coordinate of the object relative to the position of the camera can be calculated considering the angle θ in Equation (3):

X_g = X, Y_g = Y cos θ + Z sin θ - h, Z_g = Z cos θ - Y sin θ    (3)

where: X_g, Y_g, Z_g - the coordinates of the object considering the angle of inclination of the mounted cameras, in the local coordinate system with its origin at the center of the dual cameras; θ - the angle of inclination of the mounted cameras; h - the mounting height of the cameras.
On a straight road similar to that depicted in Figure 12, the calculation of the distance between the cameras and the object in front of the vehicle requires only an estimation of the longitudinal vertical distance. Therefore, Z_g can be considered to be the distance between the cameras and the object in front of the vehicle.
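The tilt compensation can be sketched as a rotation of the camera-frame coordinates by θ plus the height offset h (the sign conventions below are our assumption, not taken from the paper):

```python
import math

def ground_coords(X, Y, Z, theta_deg, h):
    """Rotate camera-frame coordinates by the mounting tilt theta and shift
    by the mounting height h, giving ground-referenced coordinates. The
    rotation direction and axis orientations are illustrative assumptions."""
    t = math.radians(theta_deg)
    Xg = X                                  # lateral offset is unchanged
    Yg = Y * math.cos(t) + Z * math.sin(t) - h
    Zg = Z * math.cos(t) - Y * math.sin(t)  # forward distance along the ground
    return Xg, Yg, Zg
```

With theta = 0 and Y equal to the mounting height, the relations collapse to the untilted case, which is a quick sanity check on the signs.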


Measurement of the Distance to an Object in Front of the Vehicle on a Curved Road
On a curved road similar to that depicted in Figure 13, the radius of curvature of the road should be incorporated into the measurement of the distance to the object in front of the vehicle. Therefore, after calculating the vertical distance using the object's X- and Z-coordinates, the distance to the object in front of the vehicle was calculated by considering the radius of curvature:

chord = √(X_g² + Z_g²)    (4)

where: chord - the vertical distance between the vehicle and the object.
An angle ϕ is subtended at the center of curvature by the vertical distance from the camera position to the object in front of the vehicle. The angle ϕ can be calculated as follows:

ϕ = 2 sin⁻¹(chord / 2R)    (5)

where: ϕ - the angle subtended by the vehicle and the object at the center of curvature of the road; R - the radius of curvature of the road.
The length of the arc of the circle corresponding to the aforementioned chord was calculated using ϕ and R by applying Equation (6):

arc = Rϕ    (6)

where: arc - the distance between the vehicle and the object along the curved road.
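The chord-to-arc conversion described above can be sketched as:

```python
import math

def arc_distance(chord, R):
    """Arc length along a curve of radius R whose endpoints are separated by
    the given straight-line chord: phi = 2*asin(chord/(2R)), arc = R*phi."""
    phi = 2.0 * math.asin(chord / (2.0 * R))
    return R * phi

# Sanity check: a quarter circle of radius 1 has chord sqrt(2) and arc pi/2.
arc = arc_distance(math.sqrt(2.0), 1.0)
```

As R grows, the arc converges to the chord, which motivates the threshold in the integrated equation.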


Integrated Equation
In the preceding two subsections, equations have been proposed for distance measurement on straight and curved roads. The difference between the distances measured on straight and curved roads is inversely proportional to the radius of curvature. Therefore, if the radius of curvature is larger than a certain threshold, the curved road can be assumed to be approximately straight without a significant loss of accuracy. When the radius of curvature was 1293 m, an error rate of at most 0.1% was observed. Therefore, 1293 m was adopted as the threshold in the proposed equation:

Z_t = Z_g, if R > 1293 m; Z_t = arc, if R ≤ 1293 m    (7)

where: Z_t - the distance between the vehicle and the object in front of the vehicle.
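The integrated selection rule can be sketched as a simple piecewise function (the 1293 m threshold comes from the text; function and variable names are ours):

```python
import math

R_THRESHOLD = 1293.0   # m; above this radius the straight-road equation applies

def forward_distance(Zg, chord, R):
    """Select between the straight-road distance Zg and the curved-road arc
    length, depending on the radius of curvature R."""
    if R > R_THRESHOLD:
        return Zg
    phi = 2.0 * math.asin(chord / (2.0 * R))   # angle at the curvature center
    return R * phi                             # arc length along the road
```

On a near-straight road the cheap branch avoids the trigonometric evaluation entirely.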

Vehicle Used for Vehicle Test
The vehicle test was conducted to verify the accuracy of the forward distance measurement equation after mounting the dual camera setup at the optimized positions. H company's Veracruz (Figure 14) was used as the test vehicle, and its specifications are listed in Table 2.

The dual cameras were mounted on the front bumper of the test vehicle, as depicted in Figure 15. Each camera was a Logitech C920 HD Pro Webcam, and its specifications are listed in Table 3.
Figure 15. Test device.

Vehicle Test Location and Conditions
Because of safety considerations, the vehicle test was conducted within the Seongseo Campus of Keimyung University, located in Daegu Metropolitan City, Korea. Figure 16 depicts the straight and curved roads utilized for testing. The radius of curvature of the curved road was 69 m; it was calculated as the radius of a planar curve based on the design speed defined in Article 19 of the rules for the structure and facility standards of roads, which are presented in Table 4. A curved road with a radius of curvature of at most 80 m was selected, considering the driving speed limit of 50 km/h in the city.
C920 HD Pro Webcam (camera): field of view: 78°; field of view (horizontal): 70.42°; field of view (vertical): 43.3°; image resolution: 1920 × 1080p; focal length: 3.67 mm.

Table 4. Minimum radius of a planar curve by design speed (the three columns correspond to maximum superelevation rates of 6%, 7%, and 8%, respectively).

Design speed (km/h)   Minimum radius of planar curve (m)
120                   710   670   630
110                   600   560   530
100                   460   440   420
90                    380   360   340
80                    280   265   250
70                    200   190   180
60                    140   135   130
50                    90    85    80

In the case of the straight road, stationary and driving states were classified using obstacles installed at 10 m intervals between distances of 10 m and 40 m in front of the vehicle. In the case of the curved road, the two states were classified using obstacles installed at 5 m intervals between 6 m and 21 m in front of the vehicle, along the center of the road and on the left and right lane markers. The entire test was conducted corresponding to a total of four cases.
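The minimum radii in Table 4 follow the common road-design relation R = V^2 / (127(e + f)), sketched below; the superelevation rate and side-friction factor used here are assumed illustrative values, not figures quoted from the cited rules.

```python
# Common road-design relation for the minimum radius of a planar curve:
#   R = V^2 / (127 * (e + f))
# where V is the design speed in km/h, e the superelevation rate, and
# f the side-friction factor. The e and f values below are assumptions.
def min_curve_radius(design_speed_kmh: float, superelevation: float,
                     side_friction: float) -> float:
    return design_speed_kmh ** 2 / (127.0 * (superelevation + side_friction))

# With e = 0.06 and f = 0.10, a 120 km/h design speed gives roughly the
# 710 m listed in Table 4 (standards round such values up).
print(round(min_curve_radius(120.0, 0.06, 0.10), 1))  # ~708.7
```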
The tests were repeated three times using the same equipment to acquire objective data. The environmental conditions during the experiments are summarized in Table 5. There was no variation in the weather.

Test Results
The vehicle test was conducted corresponding to four cases of stationary and driving states on the straight and curved roads. Figure 17 depicts the post-correction images captured by the dual-camera setup used to measure the distance to the object in front of the vehicle. Table 6 summarizes the deviations of the theoretically calculated distances from the actual distances.

In the case of the stationary state on the straight road, the objects placed at various distances in front of the vehicle were identified. The maximum error was observed to be 3.54%, corresponding to the 30 m point.
In the case of the driving state on the straight road, the objects at distances of 10 m and 20 m in front of the vehicle were identified, whereas those farther away were not. This can be attributed to factors such as vehicular turbulence, variations in illumination, and transmission of vibration to the cameras, caused by the driving state. The maximum error was observed to be 5.35% at the 20 m point.
In the case of the stationary state on the curved road, the objects at distances between 5 m and 20 m in front of the vehicle were identified. The maximum error was observed to be 9.13%, corresponding to the 20 m point.
In the case of the driving state on the curved road, the objects at distances between 5 m and 15 m in front of the vehicle were identified, whereas those at a distance of 20 m were not. Similar to the case of the straight road, this was attributed to factors such as vehicular turbulence, variations in illumination, and transmission of vibrations to cameras. The maximum error was observed to be 9.40%, corresponding to the 6 m point.
The test results demonstrate that the error in the measurement of the distance between the vehicle and objects in front of it increases when the object is detected inaccurately owing to factors such as vehicular turbulence, variations in illumination, and transmission of vibrations to the cameras. Furthermore, the error tends to be relatively large in the case of the driving state on a curved road compared with that on a straight road; this error is affected by the fixed radius of curvature used in the calculation process.
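The error rates quoted above reduce to a simple percent-deviation calculation over distance pairs, as sketched below; the measured value in the example is an illustrative assumption, not a figure taken from Table 6.

```python
# Percent deviation of the theoretically calculated distance from the
# actual distance, as used to report the error rates in the test results.
def error_rate_percent(measured_m: float, actual_m: float) -> float:
    return abs(measured_m - actual_m) / actual_m * 100.0

# Illustrative only: a 31.06 m estimate at the 30 m point would give ~3.5%.
print(round(error_rate_percent(31.06, 30.0), 2))
```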

Conclusions
In this study, correction of camera images and lane detection on roads were performed for vehicle tests and evaluation. Furthermore, the mounting positions of the cameras were optimized in terms of three variables: height, baseline, and angle of inclination. Equations for measuring the distance to an object in front of the vehicle on straight and curved roads were proposed and validated via vehicle tests covering stationary and driving states. The results are summarized below:
(1) Dual-camera images were used for lane detection. The ROI was selected to reduce the image-processing time, and the yellow color was extracted in the HSV color space. The result was then combined with a grayscale conversion of the input image. Following edge detection using the Canny edge detector, the Hough transform was used to obtain the initial and final points of each straight line. After calculating the gradients of the straight lines, the lane was filtered and determined.
(2) Height, inter-camera baseline, and angle of inclination were considered as variables for optimizing the mounting positions of the dual cameras on the vehicle. Vehicle tests were conducted on actual roads after mounting the dual cameras on a real vehicle. The test results revealed that the error rate was smallest (0.86%) at a height of 40 cm, a baseline of 30 cm, and an angle of 12°. Hence, this was considered the optimal position.
(3) Theoretical equations were proposed for the measurement of the distance between the vehicle and an object in front of it on straight and curved roads. The dual cameras were mounted at the identified optimal positions to validate the proposed equations. Vehicle tests were conducted for stationary and driving states on straight and curved roads. On the straight road, maximum error rates of 3.54% and 5.35% were observed for the stationary and driving states, respectively; on the curved road, the corresponding values were 9.13% and 9.40%. Because the error rates were less than 10%, the proposed equation for measuring the distance to objects in front of a vehicle was considered reliable.
To summarize, the mounting positions of the cameras were optimized via vehicle tests using the dual cameras, and image correction and lane detection were performed. Furthermore, the proposed theoretical equation for measuring the distance between the vehicle and objects in front of it was verified via vehicle tests, with obstacles placed at the selected positions.
The aforementioned results are significant for the following reasons. They establish that expensive equipment and professional personnel are not required for autonomous vehicle tests, enabling research and development on autonomous driving using only cameras as sensors. Furthermore, easily available webcams can be applied to the testing and evaluation of autonomous driving without additional sensors. In the future, we expect tests to be conducted on ACC, LKAS, and HDA at the respective levels of vehicle automation.