Parking Space and Obstacle Detection Based on a Vision Sensor and Checkerboard Grid Laser

Abstract: The accuracy of automated parking technology that uses ultrasonic radar or camera vision for obstacle and parking space identification can easily be affected by the surrounding environment, especially when the color of the obstacles is similar to that of the ground. Additionally, this type of system cannot recognize the size of the detected obstacles. This paper proposes a method to identify parking spaces and obstacles based on visual sensor and laser device recognition by installing a laser transmitter on the car. The laser transmitter projects a checkerboard-shaped laser grid (mesh) onto the ground, which deforms according to the conditions it encounters; the deformed grid is captured by the camera and taken as the region of interest for the necessary image processing. The experimental results show that this method can effectively identify obstacles, their size, and parking spaces, even when the obstacles and the background have a similar color, compared with using sensors or cameras alone.


Introduction
The advancements of science and technology, coupled with the rapid improvement in the standard of living, have resulted in higher demand for intelligent vehicles. One such demand is for the Automatic Parking System (APS). As a result of rapid development in most cities around the world, parking spaces are becoming narrower, and parking a vehicle without the guidance of a third party is becoming problematic and time consuming, even with a parking guide. With the rapid development of APS technology, this task can be accomplished easily and quickly without a parking guide. Such a system uses sensors or cameras to assess the parking situation and automatically adjusts the angle of the steering device to accomplish a successful parking process [1,2].
APSs that use ultrasonic radar sensors to find parking spaces have the advantages of high-speed data processing and a long detection distance, and they are free from the influence of light. However, ultrasonic radar cannot distinguish the types of obstacles, and it easily misjudges when scanning low barriers or potholes on the ground [3,4].
In addition to ultrasonic radar, the parking space can also be recognized by camera vision. The advantage of the camera is that it can acquire a large amount of data about the surrounding environment and can detect inclined objects. However, it is greatly influenced by the surrounding environment, and the accuracy of recognizing obstacles on the ground is low when the color of the obstacles is similar to the color of the ground. Additionally, monocular cameras cannot directly obtain depth information, so the size of an obstacle is difficult to estimate.

After acquiring the environment image of the parking space, the system first preprocesses the image, which mainly includes gamma-transform image enhancement, image graying, mean filtering for smoothing and denoising, and binarized edge detection. Then, the region of interest is extracted. After that, contour detection and convex hull detection are performed in the region of interest. Finally, the contour and convex hull are displayed in the images. The algorithm flow is shown in Figure 3.
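A minimal sketch of this flow, assuming OpenCV (the library used later in the Experiments section); the file names, gamma value, and threshold below are placeholders rather than the authors' actual settings:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal outline of the processing flow of Figure 3 (assumed parameter values).
int main() {
    cv::Mat frame = cv::imread("parking_scene.png");    // image from the cameras
    if (frame.empty()) return -1;

    // 1. Gamma-transform image enhancement (gamma > 1 darkens an over-bright scene; value assumed).
    cv::Mat f32, enhanced;
    frame.convertTo(f32, CV_32F, 1.0 / 255.0);
    cv::pow(f32, 2.2, f32);
    f32.convertTo(enhanced, CV_8U, 255.0);

    // 2. Graying.
    cv::Mat gray;
    cv::cvtColor(enhanced, gray, cv::COLOR_BGR2GRAY);

    // 3. Mean filtering for smoothing and denoising (3x3 template).
    cv::Mat smoothed;
    cv::blur(gray, smoothed, cv::Size(3, 3));

    // 4. Binarization; the threshold (here 128) is adjusted to expose the laser grid.
    cv::Mat binary;
    cv::threshold(smoothed, binary, 128, 255, cv::THRESH_BINARY);

    // 5. Contour and convex hull detection inside the region of interest.
    std::vector<std::vector<cv::Point>> contours, hulls;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);
        hulls.push_back(hull);
    }
    cv::drawContours(frame, contours, -1, cv::Scalar(0, 255, 0), 1);
    cv::drawContours(frame, hulls, -1, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("result.png", frame);
    return 0;
}
```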

Realization of Checkerboard Laser Grid
The effect of the grid laser emitter on the ground is shown in Figure 4. The laser emitter illuminates the ground to present a checkerboard grid effect, and the laser lines in the grid change shape when they illuminate obstacles on the ground. The cameras can capture these shape changes for further image processing or machine learning to identify obstacles. Due to the high brightness, high directivity, and high monochromaticity of the laser, when the camera captures the laser mesh, a corresponding filter or color detection in image processing can improve the detection of the mesh and reduce the amount of image processing in the later stages. As shown in Figure 4, the laser emitter is obliquely irradiated onto the ground; the center line of the laser emitter is at α degrees to the horizontal plane; the laser emitter is at height h2 above the horizontal ground; and the distance between the laser emitter and the checkerboard laser grid is l.
As shown in Figure 5, in the ideal case, the laser emitter illuminates level ground and presents a uniformly distributed laser grid in which a single cell has length x0 and width y0. In reality, because the laser emitter is tilted relative to the ground, the projected grid is deformed.
As shown in Figure 6, the laser network is an 8 × 8 grid; the laser line closest to the laser transmitter is taken as the X-axis, and the central axis of the laser network as the Y-axis. The vertical distances between the horizontal laser lines and the origin are 0, y1, y2, …, y8, respectively. Due to the deformation of the vertical laser lines, the endpoints move from −x4, −x3, …, 0, …, x3, x4 to −x4′, −x3′, …, 0, …, x3′, x4′, so the X-direction span of each grid cell changes accordingly. Furthermore, the coordinates of each grid intersection can be calculated, and when an obstacle appears in the laser grid, the coordinate range of the obstacle can be evaluated according to the position of the deformation.
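As a worked illustration of this geometry, under the idealized assumptions of a flat ground plane and the emitter's center line hitting the center of the grid (assumptions not stated explicitly above, and the symbols d, x_ij, y_ij are introduced here only for illustration), basic trigonometry relates the quantities in Figure 4, and the ideal grid of Figure 5 gives the undeformed intersection coordinates used in Figure 6:

```latex
% Idealized flat-ground geometry (assumption): emitter height h_2, tilt angle \alpha.
% Slant distance from the emitter to the centre of the laser grid:
l = \frac{h_2}{\sin\alpha},
\qquad
% horizontal offset of the grid centre from the point directly below the emitter:
d = \frac{h_2}{\tan\alpha}.

% Undeformed grid of Figures 5 and 6: cell length x_0 and width y_0, so the
% intersection of the i-th vertical and j-th horizontal laser line sits at
(x_{ij},\, y_{ij}) = (i\,x_0,\; j\,y_0), \qquad i = -4,\dots,4,\quad j = 0,\dots,8.
```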

Image Acquisition
The image is captured by 360° cameras. The 360° cameras are equipped with four camera components mounted on the front, rear, left, and right sides of the car. When installing the cameras, the front camera is installed on the logo or the grill, and the rear camera is installed near the license plate light or trunk handle, as close as possible to the central axis. Left and right cameras are installed under the left and right rearview mirrors. The number and position of the cameras can also be adjusted according to the actual situation. Wide-angle cameras can be used, which can take complete environmental images around the vehicle at the same time. Images of the surrounding environment are simultaneously collected through the 360° cameras. After the image correction and image mosaicking, a 360° image is formed [16].

Camera Calibration
The calibration of the cameras mainly involves the transformation of four coordinate systems: world coordinate system, camera coordinate system, image plane coordinate system, and image pixel coordinate system. The calibration methods mainly include the Zhengyou Zhang calibration method [17], the RAC (Radial Alignment Constraint) two-stage method of Tsai, and the DLT (Direct Linear Transform) method [18][19][20].
Using a given known calibrator, the external parameter rotation matrix R of the camera, the translation matrix T, the principal point coordinates of the image (Cx, Cy), the height and width of a single pixel Sx, Sy, the focal length f, and the distortion factor k can be calculated.

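A hedged sketch of how these parameters can be obtained with the Zhang-style chessboard calibration mentioned above, assuming OpenCV; the board dimensions, square size, and image file names are placeholders, not the setup used in the paper:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

// Minimal Zhang-style calibration sketch (board size, square size, and image list assumed).
int main() {
    const cv::Size boardSize(9, 6);          // inner corners of the calibration chessboard
    const float squareSize = 25.0f;          // square edge length in mm (assumed)

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // One ideal board pose in world coordinates (Z = 0 plane).
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSize, r * squareSize, 0.0f);

    for (int i = 0; i < 15; ++i) {           // 15 calibration views (assumed)
        cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }

    // K holds the focal length f and principal point (Cx, Cy); distCoeffs holds the distortion factors.
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;       // per-view rotation R and translation T
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK =\n" << K << std::endl;
    return 0;
}
```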

Image Preprocessing
The image acquired by the camera is affected by noise and the surrounding environment, so the target object to be recognized is not highly distinguishable from the background and the image cannot be used directly. Therefore, the original image needs to be preprocessed.

Grayscale Processing
The process of transforming a color image into a gray image is called grayscale processing of the image. According to the principle of the three primary colors red, green, and blue, any color F can be produced by mixing different proportions of R, G, and B.
The color depth of a pixel in a gray image is called the gray value. The difference between a grayscale image and a black-and-white image is that the grayscale image contains the concept of color depth, which is the gray level. Since the color image has three channels and the gray image has only one channel, the processing speed for the color image is slow compared to the gray image. The grayscale processed image can greatly reduce the amount of subsequent calculation, and the information contained in the grayscale image is enough for calculation and analysis.
The image is converted into a grayscale image by decomposing the color of each pixel into its R, G, and B components and combining them into a single gray value with a weighting formula.
When all the pixels in the color image are transformed in this way, the color image is converted into a grayscale image [21].
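A short sketch of this conversion; the 0.299/0.587/0.114 weighting coefficients below are the commonly used luminance weights and are an assumption here, since the exact coefficients of the paper's formula are not reproduced in this text:

```cpp
#include <opencv2/opencv.hpp>

// Grayscale conversion; the 0.299/0.587/0.114 weights are the common luminance
// coefficients and are assumed here (OpenCV's COLOR_BGR2GRAY uses the same values).
cv::Mat toGray(const cv::Mat& bgr) {
    cv::Mat gray(bgr.rows, bgr.cols, CV_8UC1);
    for (int y = 0; y < bgr.rows; ++y) {
        for (int x = 0; x < bgr.cols; ++x) {
            const cv::Vec3b& p = bgr.at<cv::Vec3b>(y, x);   // p[0]=B, p[1]=G, p[2]=R
            gray.at<uchar>(y, x) =
                cv::saturate_cast<uchar>(0.299 * p[2] + 0.587 * p[1] + 0.114 * p[0]);
        }
    }
    return gray;   // equivalently: cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
}
```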

Smoothing and Denoising of Images
In the process of acquisition and transmission, the image can be affected by noise. This noise degrades the image quality and can mask some of the characteristics of the image, potentially making analysis of the image difficult. While removing noise from the target image, it is also necessary to retain the details of the original image as much as possible, and the quality of this processing directly affects the effectiveness and reliability of subsequent image processing and analysis. Commonly used methods for smoothing and denoising include mean filtering, median filtering, bilateral filtering, and Gaussian filtering [22].

Mean Filtering
In this paper, mean filtering is adopted for smoothing and denoising. Mean filtering is a typical linear filtering algorithm that helps eliminate sharp noise in images and realizes image smoothing and blurring.

The mean filtering algorithm selects an appropriate template operator and replaces the gray value of each pixel with the average of the gray values in its neighborhood. After smoothing and denoising with the mean filter, image f(i, j) is transformed into image g(x, y) according to

g(x, y) = (1/M) Σ f(i, j),

where the sum runs over the pixels of the template centered on (x, y) and M is the total number of pixels in the template, including the current pixel. Generally, the template operator is m × m; if the template operator is 3 × 3, the total number of template pixels M is nine, and the central pixel value is the average of the nine gray values in the template. As shown in Figure 8, after mean filtering, the value of the center pixel becomes this average.
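A small sketch of the 3 × 3 mean filter described above, written as an explicit template operation together with the equivalent OpenCV call (the 3 × 3 template size follows the example in the text):

```cpp
#include <opencv2/opencv.hpp>

// 3x3 mean filter: each output pixel is the average of the 9 pixels in its template,
// i.e. g(x, y) = (1/M) * sum of f(i, j) over the neighbourhood, with M = 9.
cv::Mat meanFilter3x3(const cv::Mat& gray) {
    cv::Mat out = gray.clone();
    for (int y = 1; y < gray.rows - 1; ++y) {
        for (int x = 1; x < gray.cols - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += gray.at<uchar>(y + dy, x + dx);
            out.at<uchar>(y, x) = static_cast<uchar>(sum / 9);
        }
    }
    return out;   // equivalently: cv::blur(gray, out, cv::Size(3, 3));
}
```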


Image Enhancement Technology
Image enhancement is necessary to either improve the image quality or the image interpretation and recognition effect by emphasizing the overall or local characteristics of the image. The commonly used techniques in image enhancement include histogram equalization, Laplacian, log transform, gamma transform, and image enhancement based on the fuzzy technique [23].

Gamma Transform
The gamma transform is the enhancement technique adopted in this paper. It is mainly used for image correction: an image whose gray levels are too high or too low is corrected to enhance contrast and image detail [24]. The transform is

s = c·r^γ,

where r is the pixel value of the input image (a non-negative real number, normalized to [0, 1]), s is the output pixel value, γ is the gamma value, and c is the grayscale scaling coefficient used to stretch the overall image gray level; c is a constant whose value ranges from zero to one and is usually taken as one.
The correction effect of the gamma transform on the image is achieved by enhancing the details of the low or high gray levels, as shown in Figure 9. It can be understood intuitively from the gamma curve, with γ = 1 as the dividing line. When γ < 1, the light intensity is increased and the stretching effect on the low-gray part of the image is enhanced, which is called gamma compression; when γ > 1, the expansion effect on the high-gray portion of the image is enhanced and the illumination intensity is weakened, which is called gamma expansion. Therefore, by taking different gamma values, the details of the low or high gray levels can be enhanced. In general, the image enhancement effect of the gamma transform is evident when the image contrast is low and the overall brightness value is high.

As shown in Figure 10a,b, since the original image is too bright, it is difficult to separate the laser grid from the background. After the gamma transform, the contrast of the image is greatly improved, and the features of the laser grid are shown more clearly, which reduces the amount of processing in the next step.
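A compact sketch of the gamma transform s = c·r^γ implemented through a lookup table; c = 1 and the γ value shown in the usage comment are assumptions to be tuned to the scene (γ > 1 weakens an over-bright image such as the one in Figure 10a):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Gamma transform s = c * r^gamma applied through a 256-entry lookup table.
// c = 1.0 by default; gamma > 1 weakens an over-bright image, gamma < 1 brightens.
cv::Mat gammaTransform(const cv::Mat& gray, double gamma, double c = 1.0) {
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i) {
        double r = i / 255.0;                          // normalised input gray level
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(c * std::pow(r, gamma) * 255.0);
    }
    cv::Mat out;
    cv::LUT(gray, lut, out);
    return out;
}

// Example use with an assumed gamma value for an over-bright laser-grid image:
// cv::Mat corrected = gammaTransform(gray, 2.2);
```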

Image Binarization
Representing an image with only black and white visual effects is known as image binarization. This process reduces the amount of data in the image in order to highlight the contour of the target object and separate it from the background. It is accomplished by applying a threshold to the image, which can be adjusted to bring out specific features of the target object.
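A minimal thresholding sketch; the fixed threshold value is an assumption and would be adjusted (or replaced by Otsu's automatic threshold) until the laser-grid contour stands out:

```cpp
#include <opencv2/opencv.hpp>

// Binarization: pixels above the threshold become white (255), the rest black (0).
// The value 128 is a placeholder; THRESH_OTSU can pick a threshold automatically.
cv::Mat binarize(const cv::Mat& gray, double thresh = 128.0) {
    cv::Mat binary;
    cv::threshold(gray, binary, thresh, 255, cv::THRESH_BINARY);
    // Alternative with an automatically chosen threshold:
    // cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    return binary;
}
```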

Feature Extraction and Recognition of Obstacles
Changes in the laser grid indicate a potential obstacle on the ground. This deformation region is then taken as the region of interest (ROI), and the necessary image processing is then applied to filter out the ROI for obstacle recognition.

Contour Detection
After the image is processed, the similarity and discontinuity of the pixel gray values make contour boundary detection straightforward [25,26]. The effect is shown in Figure 11.


Convex Hull Detection
The convex hull is a concept from computational geometry and graphics. It may be defined as the smallest convex set containing a given set of points, that is, the intersection of all convex sets that contain the given points in the Euclidean plane or space. It can be imagined as a rubber band stretched so that it just wraps all the points.
A useful method for understanding the shape contour of an object is to calculate its convex hull and then its convexity defects [27]. The shape of many complex objects can be effectively characterized by these defects.
As shown in Figure 12, convexity defects can be illustrated with a human hand. The dark contour line is the convex hull around the hand, and the regions (A-F) between the convex hull and the hand contour are the convexity defects.
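The contour, convex hull, and convexity-defect steps described above map onto the following hedged OpenCV sketch; the input is assumed to be the binarized region-of-interest image, and the drawing colors are arbitrary:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Contours, convex hulls, and convexity defects of a binarized ROI image.
void detectContoursAndHulls(const cv::Mat& binaryRoi, cv::Mat& canvas) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryRoi, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& contour : contours) {
        // Convex hull as indices (needed for convexityDefects) and as points (for drawing).
        std::vector<int> hullIdx;
        std::vector<cv::Point> hullPts;
        cv::convexHull(contour, hullIdx, false);
        cv::convexHull(contour, hullPts, false);

        std::vector<cv::Vec4i> defects;            // start, end, farthest point, depth
        if (contour.size() > 3 && hullIdx.size() > 3)
            cv::convexityDefects(contour, hullIdx, defects);

        std::vector<std::vector<cv::Point>> draw{contour, hullPts};
        cv::drawContours(canvas, draw, 0, cv::Scalar(0, 255, 0), 1);   // contour
        cv::drawContours(canvas, draw, 1, cv::Scalar(0, 0, 255), 2);   // convex hull
    }
}
```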


Division of Obstacle Areas
Changes in the laser grids on the ground indicate that there is a potential obstacle on the ground, which results in different pixel gray values for the background and the obstacles.
The boundary between the obstacles and the background is captured through the discontinuity of these regions: by adjusting the contour threshold of the ground laser grid area, the ground laser grid layer and the obstacle layer are segmented, and the coordinate region where the obstacle is located is found through the deformation area of the laser mesh, as shown in Figure 13.
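Once the deformation region has been segmented, the obstacle's coordinate region can be approximated by the bounding rectangle of the largest contour in that region. A minimal sketch, assuming the deformation region has already been binarized:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Approximate the obstacle's coordinate region by the bounding box of the
// largest contour found inside the binarized laser-grid deformation region.
cv::Rect locateObstacle(const cv::Mat& deformationRoi) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(deformationRoi, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double maxArea = 0.0;
    cv::Rect obstacleBox;
    for (const auto& c : contours) {
        double area = cv::contourArea(c);
        if (area > maxArea) {
            maxArea = area;
            obstacleBox = cv::boundingRect(c);   // pixel-coordinate region of the obstacle
        }
    }
    return obstacleBox;   // empty rectangle if no deformation contour was found
}
```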

Parking Space Identification
The recognition of the parking space mainly involves the acquisition of the type of parking space and judging whether the identified parking space can meet the parking conditions. The types of parking spaces are shown in Table 1. We can classify them according to the parking modes, the existence of parking lines, and the existence of vehicles around. Because most research is mainly about parking spaces with parking lines and surrounding vehicles, this part only includes the situation without parking lines and vehicles on both sides. Because the parallel parking mode and the vertical parking mode are similar in terms of the identification methods, this part takes the latter as an example.
As shown in Figures 14 and 15, the laser emitting devices illuminate the ground to present the checkerboard grid shape. When processing the captured image of the parking space, we divide the regions of the vehicles and the parking space by acquiring the contours of the vehicles. According to the contours of cars A, B, D, and E, we can judge the posture of each vehicle body and thereby acquire the type of parking space, such as a vertical parking space or an oblique parking space. The laser grid area between two cars' contours can be considered the area of the parking space; if there are no obstacles in this area, the parking space can be considered valid.
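As a hedged illustration of this judgment (not the authors' exact criterion), the grid area between the bounding boxes of two detected cars can be checked for obstacle contours and for a minimum required width; the geometric rule and the width margin below are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch of the parking-space validity check described above (the geometric
// criterion and the width requirement are assumptions, not the paper's exact rule).
bool isSpaceValid(const cv::Rect& carA, const cv::Rect& carB,
                  const std::vector<cv::Rect>& obstacles, int requiredWidth) {
    // Candidate space: the horizontal gap between the two car contours.
    int left   = std::min(carA.x + carA.width, carB.x + carB.width);
    int right  = std::max(carA.x, carB.x);
    int top    = std::min(carA.y, carB.y);
    int bottom = std::max(carA.y + carA.height, carB.y + carB.height);
    cv::Rect space(left, top, right - left, bottom - top);

    if (space.width < requiredWidth) return false;           // gap too narrow for the vehicle
    for (const auto& obs : obstacles)
        if ((space & obs).area() > 0) return false;           // obstacle inside the gap
    return true;                                               // valid parking space
}
```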


Experiments
The experiment was made up of hardware parts, as shown in Figure 16, and software parts. The hardware consisted of a grid laser emitter that projected a 51 × 51 laser grid on the ground and a camera, while the software part consisted of image processing and obstacle detection on the on-board computer. The software was implemented in C++ with Visual Studio 2017 as the development environment, using OpenCV Version 3.4.4 for image processing and obstacle detection.

In order to verify the effectiveness of the proposed method for identifying parking spaces and obstacles based on vision sensors and laser devices, various obstacles and scenes encountered in real-life parking environments were simulated and tested on the experimental platform in Figure 16. The laser transmitter and the camera mounted on the vehicle were simulated on the experimental platform; the laser emitter and camera were then adjusted to the appropriate angle so that the laser was evenly distributed on the ground to render a checkerboard laser grid effect, while the parking space scenes were simulated with model cars.

As shown in Figures 17 and 18, after collecting the images of the simulated scenes, code was written that called functions of the OpenCV library to preprocess the images and detect the contours and convex hulls. In the process of image processing, only the laser grid on the ground was used as the region of interest, and the image threshold was changed to reveal the contour of the laser mesh. The effect is shown in Figure 19.

The gray value of the background in the scenes, such as the wall and the parking line, was different from the gray value of the ground laser line and was therefore filtered out. The deformation region of the laser grid on the ground was then taken as the region of interest, and by adjusting the threshold of the image, the contours of obstacles were revealed, as shown in Figure 20.

As shown in Figures 21 and 22, when there were obstacles and parking locks in the parking space, the contours could be obtained by taking the laser grid and the deformation region of the laser grid on the ground as the region of interest, respectively. This revealed the area where each obstacle was located and its size, which made it convenient to identify the parking space area and judge whether the parking space was adequate.

For the classification and testing of the different types of obstacles that may exist in a real parking environment, experiments with regular obstacles, potholes, and obstacles with a color similar to the background were carried out, as shown in Figures 23 and 24, respectively. When the color of the obstacle was similar to the background, it was challenging to identify the outline of the obstacle area through image processing alone. However, when we experimented with the proposed method of combining the camera vision and the laser grid, the area and contour of the obstacles could easily be obtained even when the obstacle was similar in color to the background, as indicated in Figures 25 and 26. Table 2 shows the test results of the recognition accuracy for the different types of obstacles.
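Because the laser lines differ from the background in color and brightness, the grid-only region of interest used in these experiments can be isolated with a color filter before thresholding. A minimal sketch assuming a green laser; the actual laser color and the HSV bounds below are assumptions that would need tuning to the device:

```cpp
#include <opencv2/opencv.hpp>

// Isolate the laser grid as the region of interest with a color filter.
// A green laser and the HSV bounds below are assumptions; tune them to the device.
cv::Mat extractLaserGrid(const cv::Mat& bgrFrame) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);

    // Bright, saturated green pixels (H ~ 45..85, high S and V).
    cv::inRange(hsv, cv::Scalar(45, 100, 100), cv::Scalar(85, 255, 255), mask);

    // Small morphological closing to join broken laser-line segments.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);
    return mask;   // binary mask of the laser mesh, used as the ROI for contour detection
}
```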

Conclusions and Future Research
Current automatic parking schemes based on ultrasonic radar and cameras are not efficient in identifying adequate parking spaces and cannot reliably identify obstacles in parking paths or parking spaces. To overcome these problems, this paper presented a method to identify parking spaces and obstacles based on a visual sensor and a laser device. The method only requires installing a laser transmitter on the body of the car. The changing images of the laser mesh are captured by the camera, and the changing area of the laser grid is then identified by image processing technology to realize the identification of parking spaces and obstacles. This method is expected to become a practical solution. The experimental results showed that the method could effectively identify obstacles and parking spaces.
For future research, more obstacle samples will be taken into consideration, and methods for the automatic classification of obstacles need to be found. In addition, experiments on identifying the parking space and acquiring the size of the parking space, as well as of the obstacles, still need to be conducted. Overall, this research provides a new solution for recognizing obstacles for automatic parking, and future research will further advance the development of this technology.
