Article

Research on Parking Space Status Recognition Method Based on Computer Vision

1 College of Transportation Engineering, Nanjing Tech University, Nanjing 211816, China
2 Jiangsu Branch, CIECC Urban Construction Design Co., Ltd., Nanjing 210012, China
* Authors to whom correspondence should be addressed.
Sustainability 2023, 15(1), 107; https://doi.org/10.3390/su15010107
Submission received: 3 October 2022 / Revised: 16 December 2022 / Accepted: 19 December 2022 / Published: 21 December 2022
(This article belongs to the Special Issue Sustainable Road Transport System Planning and Optimization)

Abstract

To improve the utilization of parking space resources and reduce the cost of installing and maintaining sensors, this paper proposes an improved computer vision-based parking space status recognition method. Overall recognition accuracy is improved by graying the video, filtering and smoothing for noise reduction, and image enhancement pre-processing; a texture feature extraction method based on the LBP operator is introduced to improve the background difference method; a perceptual hash algorithm is then used to calculate the Hamming distance between the hash strings of the background image and the current video frame, excluding the influence of light and pedestrians on recognition accuracy. Finally, a parking space status recognition system was developed in the Python environment, and parking spaces were recognized under three environmental conditions: with direct light, without direct light, and in rain and snow. The overall average accuracy of the experimental results was 97.2%, which verifies the accuracy of the model.

1. Introduction

With the rapid development of computer and communication technology, intelligent parking that replaces manual labor with computers, sensors, and other technical means is gradually becoming a reality. Parking status identification systems are built to improve the ease of use and management efficiency of car parks. Scholars have applied a variety of non-video state recognition and video detection methods to intelligent parking [1]; today, video detection is the more common approach for car park monitoring [2]. At this stage, some indoor car parks monitor empty parking spaces by installing geomagnetic, ultrasonic, microwave, and similar sensors [3,4,5,6,7,8,9,10]. Each device can only detect one parking space, so this method requires a large number of devices at high cost.
To address the shortcomings of sensor-based detection, scholars have researched image-based parking space recognition. Image-based detection algorithms focus both on the detection target and on the change in parking space status, comparing images before and after parking with reference to markings, moving objects (cars), distance, and light in the car park. Tschentscher et al. [11] trained support vector machine (SVM) classifiers on the color features and Gaussian difference features of parking spaces to verify their real-time status, reaching a recognition accuracy of up to 99.96%; Xu et al. [12] proposed a color vision-based parking space recognition method that identifies parking space marking lines with a trained neural network; Jung et al. [13] developed a semi-automatic parking space status recognition system by applying the Hough transform to bird's-eye view images; Wan Tingting et al. [14,15] used Fisher's linear discriminant function for parking space recognition; Meng Yan et al. [16] used manual calibration to determine the detection area, automatically selected detection points within each parking space area, and judged parking space status from changes in the image information at those points; Seo et al. [17] proposed an automatic parking recognition system based on deep learning that, after learning a variety of parking lot images, achieves high accuracy in parking status recognition; Xu et al. [18] used TensorFlow to train a vehicle target recognition model and accurately judged the orderly numbering and vacancy of spaces from the vehicle distribution results; Ma et al. [19] proposed a new similarity formula to identify staining features through template matching and then identified parking space status from the relative position of the features and the spaces; Zhang [20] proposed an improved parking recognition method combining rough extraction and fine matching, which improved recognition accuracy; Li Yongyi et al. [21] improved the artificial potential field method for automatic driving trajectory planning and introduced an invasive weed algorithm to remedy the defects of the traditional method; Huang et al. [22] designed a connected-region-based parking extraction method that further simplified parking extraction and image processing; Fintzel et al. [23] and Vestri et al. [24] identified parking spaces from the perspective of stereo ranging; Li Yongyi et al. [25] improved the estimation accuracy of model parameters with generalized recursive least squares (GRLS); Jiang et al. [26] proposed a parking space detection method that uses local binary patterns (LBP) to extract texture features of parking space images and a mean shift algorithm to segment them, with an accuracy rate of more than 98%; and Almeida et al. [27] designed a parking space status detection system relying on texture features, studying texture feature descriptors in a targeted manner.
In summary, a single camera can detect multiple target parking spaces at the same time and make effective use of equipment already in the parking lot, so image-based parking space status recognition has significant advantages over costlier sensor-based recognition. However, vehicle detection algorithms that rely on vehicle movement are susceptible to interference from pedestrians, lighting, and other factors and cannot guarantee accurate state detection, while detection algorithms that depend on image features involve more complex calculations and require large amounts of sample data for classifier training, which limits their use. For this reason, this paper employs an improved computer vision-based method for recognizing the status of parking spaces. Overall recognition accuracy is improved by graying the video, filtering and smoothing for noise reduction, and image enhancement pre-processing; a texture feature extraction method based on the LBP operator is introduced to improve the background difference method; and a perceptual hashing algorithm is then used to calculate the Hamming distance between the hash strings of the background base image and the current video frame, excluding the influence of light and pedestrians on recognition accuracy. Finally, a parking space recognition system was developed in the Python environment, and the validity of the model was verified by identifying parking spaces under three environmental conditions: with direct light, without direct light, and in rain and snow.

2. Image Pre-Processing

Computer vision refers to machine vision that uses cameras and computers instead of the human eye to identify, track, and measure targets, and that further processes the resulting images to make them more suitable for human observation or for transmission to instruments for inspection. Car parks are susceptible to external factors such as uneven lighting, and equipment factors such as the camera's imaging quality, imaging method, and resolution can introduce image noise, lowering image quality. Therefore, to improve overall recognition accuracy, the image must be pre-processed, specifically by gray-scaling [28], filtering and smoothing for noise reduction, and image enhancement.
Because the human eye is differently sensitive to different colors, the red, green, and blue components are averaged with different weights to obtain the gray value of the gray image; the eye's sensitivity to these three colors, from highest to lowest, is green, red, then blue. The specific expression is as follows:
Gray(i, j) = 0.299 × R(i, j) + 0.587 × G(i, j) + 0.114 × B(i, j)   (1)
where
  • Gray(i, j) is the gray value of the image at coordinates (i, j);
  • R(i, j) is the red component at that point;
  • G(i, j) is the green component at that point;
  • B(i, j) is the blue component at that point.
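As a minimal sketch (the file name and variable names are illustrative), this conversion can be done with OpenCV, whose COLOR_BGR2GRAY conversion applies exactly the weights of Equation (1):

```python
import cv2

# Load a frame (path is illustrative) and gray it. cv2.cvtColor applies
# Gray = 0.299*R + 0.587*G + 0.114*B, matching Equation (1); note that
# OpenCV stores channels in BGR order.
frame = cv2.imread("parking_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Equivalent manual computation for the pixel at (i, j) = (0, 0):
b, g, r = frame[0, 0].astype(float)
gray_manual = 0.299 * r + 0.587 * g + 0.114 * b
```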
Noise reduction is applied to images by filter smoothing [29]. Median filtering smooths a pixel by sorting the gray values of the point and those in a certain area around it and selecting the median as the gray value of that point. The specific expression is as follows:
g(x, y) = median{ f(i, j) }, (i, j) ∈ S   (2)
where
  • g(x, y) is the gray value of the output image at the point (x, y);
  • S is a neighborhood of the point (x, y) in the original image;
  • (i, j) is a pixel point within the neighborhood S, and f(i, j) is its gray value.
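Continuing the sketch above, median filtering is available directly in OpenCV; a 3 × 3 neighborhood S is assumed here for illustration:

```python
# 3 x 3 median filtering (Equation (2)): each pixel is replaced by the
# median gray value of its 3 x 3 neighborhood S.
denoised = cv2.medianBlur(gray, 3)  # ksize = 3 gives the 3 x 3 window
```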
To obtain better features and visual effects, the processed image can be enhanced by histogram equalization. The grayscale histogram records the frequency of each gray level in an image: its horizontal coordinate is the gray value and its vertical coordinate is the frequency of that gray value in the image. If the pixels are concentrated in low-gray areas, the image is dark overall; if they are concentrated in high-gray areas, it is bright overall; if they cover almost all gray levels and are evenly distributed, the image shows higher contrast and clearer detail because of the wide range of gray levels. Histogram equalization is the process of achieving a uniform distribution of the gray-scale probability density [30]. Figure 1 compares the image and its grayscale histogram before and after equalization.
As Figure 1 depicts, the number of pixels in the high-gray areas of the processed image has increased significantly, so the displayed brightness increases; in Figure 1c, the background is brighter and the contrast between the parking area and the background is particularly obvious.
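The whole pre-processing chain of this section can be bundled into one helper; this is a sketch in which the function name and the 3 × 3 median window are our own choices:

```python
import cv2

def preprocess(img):
    """Section 2 pre-processing: gray-scaling (Equation (1)), 3 x 3 median
    filtering (Equation (2)), and histogram equalization."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 3)
    return cv2.equalizeHist(denoised)  # uniform gray-level density (cf. Figure 1)
```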

3. Improved Background Difference Method

3.1. Principle of Improved Background Difference Method

The background difference method determines whether a parking space is occupied by comparing the difference in gray values between the captured image and the background image. However, the traditional algorithm has two drawbacks: first, it is sensitive to changes in the environment; for example, when the lighting changes, the gray values of the image change as well, leading to incorrect status judgments; second, it cannot exclude interfering factors such as pedestrians, which are easily detected as vehicles.
This paper improves the background difference method: retaining its principle but relying on extracted image features, the LBP operator is introduced to discriminate the similarity between the image and the background, overcoming the interference of pedestrians and light in the recognition process. Figure 2 and Figure 3 show the flow before and after the improvement.

3.2. LBP Operator

The image features are extracted using a texture extraction method based on the LBP operator, which proceeds as follows.
Step 1. Gray-scale the detection image.
Step 2. Obtain the LBP value and calculate the texture feature value of each pixel using the LBP operator, as shown in Equations (3) and (4):
LBP(x, y) = ∑_{n=1}^{N} s[I(n) − I(c)] × 2^{n−1}   (3)
where
  • N is the number of sampling points;
  • I(n) is the gray value of the n-th sampling pixel;
  • I(c) is the gray value of the central pixel.
The function s(x) is defined as follows:
s(x) = { 0, x < 0; 1, x ≥ 0 }   (4)
Step 3. The edge pixel points retain their original gray values; the output LBP texture maps of the parking space in different states are shown in Figure 4.
The LBP operator used in this paper was a 3 × 3 template with eight sampling points. Each sampling point was compared with the central gray value: points with values greater than the center were coded as 1, otherwise 0, and the resulting binary number was converted to decimal. When converting the comparison values to a binary number, each pixel of the image must be processed with the same starting position and rotation direction.
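A straightforward (unoptimized) sketch of this 3 × 3 LBP operator follows; the function name and loop implementation are illustrative, not the authors' exact code:

```python
import numpy as np

def lbp_texture(gray):
    """Basic 3 x 3 LBP operator (Equations (3) and (4)) with N = 8 sampling
    points; edge pixels keep their original gray values (Step 3)."""
    h, w = gray.shape
    out = gray.copy()
    # Fixed starting position and clockwise order, applied identically
    # to every pixel, as the text requires.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for n, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= center:  # s(x) of Equation (4)
                    code |= 1 << n                  # weight 2^(n-1), n = 1..8
            out[y, x] = code
    return out
```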
To investigate the effect of illumination on the algorithm's results, the resistance of LBP texture images to changes in light must be analyzed. From the principle and calculation steps of the LBP operator, the LBP feature value of an output pixel is not directly related to the pixel's own gray value but to the relationship between its gray value and those of the surrounding pixels. Consequently, when the lighting conditions change, the texture feature map does not change with the gray values of the whole image region; that is, the LBP texture map is robust to changes in lighting, as shown in Figure 5.

3.3. Image Similarity Calculation

Based on the LBP feature map, the perceptual hashing algorithm (pHash) is used to calculate the Hamming distance between the hash strings of the background image and the current video frame. (The Hamming distance is the number of positions at which two strings of the same length, here 64 bits, differ. For example, the strings "111000" and "110100" differ in the third and fourth digits, so their Hamming distance is 2 and their similarity is (6 − 2) ÷ 6 ≈ 0.667.) The algorithm computes quickly and its recognition results are accurate. Its calculation process is as follows.
Step 1. Gray the image;
Step 2. Reduce the image, generally to 32 × 32 pixels, which removes the high-frequency part;
Step 3. Obtain the 32 × 32 DCT coefficient matrix of the reduced image by the discrete cosine transform (DCT);
Step 4. Retain the 8 × 8 matrix in the top-left corner and calculate the mean of its DCT coefficients;
Step 5. Calculate the image hash value: each value in the reserved 8 × 8 DCT coefficient matrix is compared with the DCT mean, with values greater than the mean recorded as "1" and values less than the mean recorded as "0"; these bits are combined into a 64-bit hash for each image;
Step 6. Calculate the Hamming distance between the two image hashes; the higher the value, the lower the similarity.
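A minimal pHash sketch following Steps 1–6 (helper names are our own; cv2.dct performs the discrete cosine transform):

```python
import cv2
import numpy as np

def phash(gray):
    """64-bit perceptual hash (Steps 2-5) of a grayscale image."""
    small = cv2.resize(gray, (32, 32))       # Step 2: drop high frequencies
    dct = cv2.dct(np.float32(small))         # Step 3: 32 x 32 DCT coefficients
    low = dct[:8, :8]                        # Step 4: top-left 8 x 8 block
    return (low > low.mean()).flatten()      # Step 5: 64-bit fingerprint

def hamming(h1, h2):
    """Step 6: number of differing bits; larger means less similar."""
    return int(np.count_nonzero(h1 != h2))
```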
To exclude the influence of pedestrians on recognition, the parking space image is divided into nine equal parts for similarity calculation. As Figure 6 shows, a pedestrian cannot appear in all three parts along a diagonal of the parking space image at the same time, so at least one of the three parts remains similar to the empty-space image; when a car occupies the space, by contrast, all three parts are occupied at once.
Thus, the following solution is proposed for pedestrian interference, and the algorithm flow is shown in Figure 7.
Step 1. Divide the parking space texture feature map into nine equal parts;
Step 2. Take the three parts along one diagonal and calculate the Hamming distance between each part and the corresponding part of the background image;
Step 3. Compare each Hamming distance with the threshold P;
Step 4. Judge whether pedestrian interference is present: when the Hamming distances of all three parts are at least the threshold P, the space is judged to be occupied; otherwise, the difference is attributed to pedestrian interference and the space is marked as vacant (see the sketch below).
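A minimal sketch of this check, reusing the phash and hamming helpers sketched above; the threshold P = 25 follows Section 4.2:

```python
def is_occupied(space_lbp, background_lbp, p_threshold=25):
    """Steps 1-4: compare the three diagonal cells of a 3 x 3 grid over the
    space's LBP map against the background; the space is occupied only if
    every diagonal cell differs by at least P."""
    h, w = space_lbp.shape
    hs, ws = h // 3, w // 3
    for k in range(3):  # diagonal cells (0, 0), (1, 1), (2, 2)
        cur = space_lbp[k * hs:(k + 1) * hs, k * ws:(k + 1) * ws]
        bg = background_lbp[k * hs:(k + 1) * hs, k * ws:(k + 1) * ws]
        if hamming(phash(cur), phash(bg)) < p_threshold:
            return False  # cell still matches the empty background: pedestrian
    return True  # all three diagonal cells differ: a vehicle, not a pedestrian
```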

4. Parking Space Status Identification Method

4.1. Parking Space Status Identification Method

Based on the improved background difference method and combined with vehicle characteristics, the similarity between the current feature image and the background feature image is calculated by the perceptual hashing algorithm (pHash), and the similarity index (the Hamming distance) is used to determine parking space status. The proposed recognition method proceeds as follows.
Step 1. Prepare for processing. After acquiring the video, mark the parking space areas according to the parking layout shown in the video; later state recognition detects only the marked areas;
Step 2. Select a background image taken when the spaces are empty; after pre-processing and feature extraction, calculate the "fingerprint" (hash string) of each parking area in the background image and store it for later comparison;
Step 3. Access the real-time video, extract the current frame, and, after pre-processing and feature map extraction, calculate the current "fingerprint" of each parking space area;
Step 4. Compare the current and background "fingerprints" and calculate the Hamming distance between them; if the distance is less than the threshold, mark the space directly as vacant; if it is greater, proceed to judge whether pedestrians or other factors are responsible;
Step 5. Determine whether pedestrian factors are affecting the space: if yes, record it as vacant; if no, mark it as occupied;
Step 6. Display status information for all parking spaces in the lot.
The specific flow of the parking space status identification method is shown in Figure 8, and a minimal end-to-end sketch follows.
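Assembling the earlier sketches, the flow of Figure 8 might look as follows; the space dictionary and the threshold Y = 30 from Section 4.2 are assumptions of this sketch, and in practice the background fingerprints would be precomputed once, per Step 2:

```python
def recognize_spaces(frame, background, spaces, y_threshold=30):
    """End-to-end sketch of Steps 1-6. `spaces` maps a space number to its
    (x1, y1, x2, y2) rectangle from the marking step (cf. Table 1)."""
    status = {}
    for no, (x1, y1, x2, y2) in spaces.items():
        cur_lbp = lbp_texture(preprocess(frame[y1:y2, x1:x2]))
        bg_lbp = lbp_texture(preprocess(background[y1:y2, x1:x2]))
        d = hamming(phash(cur_lbp), phash(bg_lbp))  # fingerprint comparison
        if d < y_threshold:
            status[no] = "N"                        # vacant
        else:                                       # possibly occupied:
            status[no] = "Y" if is_occupied(cur_lbp, bg_lbp) else "N"
    return status
```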

4.2. Status Discrimination Threshold

Since image similarity changes drastically when a parking space contains a car, illumination has little effect on judging occupied spaces; the main concern is whether illumination changes affect the recognition of vacant spaces. To verify the resistance of the texture feature map to illumination, videos of the Chen Yi Square car park at the Jiangpu Campus of the Nanjing University of Technology were collected and classified by lighting condition at different times of day, as follows.
Condition one: no direct light, between 6:00 and 8:00;
Condition two: weak light, on sunny days between 9:00 and 11:00 and between 15:00 and 17:00;
Condition three: strong light, on sunny days between 12:00 and 14:00.
One hundred cases were taken from each of the three conditions; after pre-processing and LBP texture map extraction, the Hamming distances to the corresponding regions of the no-illumination background map were calculated by the perceptual hashing algorithm. The results are presented in Figure 9.
As Figure 9 demonstrates, the Hamming distances between vacant spaces and the background map fall mainly in the range 16–25 under the different lighting conditions, and changes in lighting do not significantly alter this range.
To determine the size of the threshold Y at the time of the algorithm design and to accurately identify the state of the parking space, it is also necessary to know the fluctuation range of the Hamming distance when the parking space has a car. The results of the calculation are shown in Figure 10.
Figure 9 and Figure 10 show that the threshold Y for distinguishing a vacant space from a possibly occupied one can range from 26 to 31; this paper set Y to 30. Consequently, when the Hamming distance D was less than 30, the space was judged vacant; otherwise, it was judged possibly occupied and entered the pedestrian interference exclusion stage. Since the maximum Hamming distance for a vacant space is 25, the threshold P for the pedestrian interference check was set to 25: when the Hamming distances of parts 1, 2, and 3 shown in Figure 7 all exceed 25, the state is judged occupied; otherwise, the difference is judged to be pedestrian interference and the space is marked vacant. The thresholds are determined as follows:
Y = { parking space is vacant, D < 30; parking space is occupied, D ≥ 30 }   (5)
P_i = { pedestrian interference, D_i < 25; parking space is occupied, D_i ≥ 25 }, i = 1, 2, 3   (6)
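Transcribed directly, the two decision rules read as follows (function and variable names are illustrative):

```python
def classify(d_space, d_parts):
    """Rules (5) and (6): Y = 30 for the whole-space Hamming distance,
    P = 25 for the three diagonal parts."""
    if d_space < 30:
        return "vacant"                    # rule (5), D < 30
    if all(d >= 25 for d in d_parts):
        return "occupied"                  # rule (6), all parts >= 25
    return "vacant"                        # pedestrian interference
```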

5. Model Validation

This paper used Python 3.5 and OpenCV 3.4.2 to implement the algorithm and examine the recognition results, taking advantage of their computational and image processing capabilities; the PyQt library was chosen for visualization.
Combining traffic characteristics with actual management requirements, a visual display interface was designed to realize the corresponding functions, as shown in Figure 11.
Area one is for drawing and marking parking spaces: after clicking on it, the user selects a parking map and draws rectangular regions over the actual parking positions to mark each space. Area two displays real-time video, where stored videos can be played. Area three displays the marked parking space status information, which can be compared with the original video on the left to check the accuracy of the detection results. Area four gives statistics of the status information in area three, showing the total number of parking spaces, the number of occupied spaces, and the number of vacant spaces.

5.1. Study Subjects

The background selected in this paper is an image of the Chenyi Square parking lot at the Jiangpu Campus of the Nanjing University of Technology taken without direct light, as Figure 12 depicts, and the parking positions are calibrated. After the spaces are marked, the parking lot status panel displays the location and number of each space in drawing order; the specific numbers and status information are displayed in the corresponding boxes after the video is imported, as Figure 13 shows.
The car park numbers and coordinate areas marked on the plan are shown in Table 1.

5.2. Status Identification

As Figure 14 shows, to ease comparison between the actual situation and the detection results, the marked spaces in the video display of area two are drawn as green boxes with the space number in green inside; in the status detection panel of area three, red boxes correspond to the different space locations and display the space number and status: Y in red when the space is occupied, N in green when it is vacant.
The detection interval was set to 10 s (i.e., every 10 s a frame is extracted for recognition and the displayed statistics are updated). This paper presents the recognition status about eight minutes after the initial state recognition; at this time, spaces 1, 2, 3, 6, and 7 were occupied, and the rest were vacant.
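A sampling loop of this kind might look as follows (the file name is illustrative; background, spaces, and recognize_spaces are as in the sketches above):

```python
import cv2

cap = cv2.VideoCapture("parking_lot.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
step = max(1, int(fps * 10))   # one analyzed frame every 10 s
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        status = recognize_spaces(frame, background, spaces)
        print(status)          # in the system, area three is updated instead
    idx += 1
cap.release()
```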
As can be seen from the actual conditions shown in area two of the figure, pedestrians passed through the region of space 9, yet the area three results still recorded space 9 as vacant; a vacant space with pedestrians passing by was not detected as occupied by a vehicle. The detection results show that the designed interference exclusion method effectively excludes the influence of pedestrians.
Figure 15 shows that spaces 4, 5, and 7 in area two were shadowed by the lighting, space 10 was affected by pedestrians, and the remaining six spaces held cars under direct light. Because the algorithm extracts texture information, it was unaffected by pedestrians, shadows, and bright light, and still accurately identified spaces 4, 5, 7, and 10 as empty. As Figure 16 shows, in rain and snow, after the background image was replaced, the algorithm also accurately identified spaces 5 and 8 as empty. Hence, the algorithm can still recognize parking spaces when lighting and weather conditions change.

5.3. Analysis of Experimental Results

Experiments were conducted on parking space state recognition under the three working conditions to test whether the computer vision-based method can accurately determine whether spaces are vacant under no direct light, direct light, and rain and snow, and to evaluate the detection performance of the system. After all the collected video was processed, the statistical results shown in Table 2 were obtained.
As Table 2 shows, under the three conditions of no direct light, direct light, and rain and snow, the recognition accuracy of the system was 98.6%, 96.5%, and 95%, respectively. Under no direct light and direct light, the main source of detection error was that many students were going to and from class, with multiple pedestrians passing through the parking area at the same time; in rain and snow, the main source of error was that snow accumulating on the tops of vehicles caused some image texture information to be lost, making the Hamming distance smaller. With little difference in the amount of data tested across cases, the average accuracy over the three conditions was 97.2%.

6. Conclusions

Parking status recognition is of great importance to the intelligent management of parking lots: it can not only enhance the user parking experience but also improve the overall turnover rate of the lot and alleviate the difficulty of urban parking. This paper focused on computer vision-based state recognition of parking lots; the specific conclusions are as follows:
  • The video is grayed, filtered and smoothed for noise reduction, and enhanced in pre-processing to improve overall recognition accuracy; a texture feature extraction method based on the LBP operator is introduced to improve the background difference method; the perceptual hashing algorithm is then used to calculate the Hamming distance between the hash strings of the background image and the current video frame, excluding the influence of light and pedestrians on recognition accuracy.
  • A parking space status recognition system was developed in the Python environment, and parking spaces were identified under three environmental conditions: with direct light, without direct light, and in rain and snow. The overall average accuracy of the experiments was 97.2%, demonstrating the accuracy of the model.
  • The computer vision-based parking status recognition method designed in this paper effectively solves the problem of collecting parking status information during guidance by a parking guidance system, supporting the goal of efficient information management of parking lots.
  • This paper only addressed errors caused by three influencing factors, weather, illumination, and pedestrians; irregular parking (such as occupying two spaces) and vehicle types were not considered and will be taken into account in future work.

Author Contributions

Conceptualization, Y.L.; Methodology, H.M. and Y.L.; Software, W.Y.; Validation, X.Z.; Formal analysis, H.M.; Investigation, S.G.; Resources, Y.L.; Data curation, W.Y.; Writing—original draft preparation, Y.L. and H.M.; Writing—review and editing, X.Z.; Visualization, S.G.; Supervision, H.M.; Funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation Project of China (Grant No. 51878349).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors are grateful and thank all those who have helped to improve this paper during the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, C.J.; Li, G.D. The Study of a City Smart Parking Mode Based on “Internet Plus”. Bull. Surv. Mapp. 2017, 11, 58.
  2. Liu, X.L. Intelligent features and outlook of GIS in the era of “Internet Plus”. Sci. Surv. Mapp. 2017, 42, 1.
  3. Yan, F.; Li, C. Research and Design of the Parking Management System Based on Internet of Things Technology. Comput. Sci. Appl. 2017, 7, 526.
  4. Wang, M.; Wang, G.; Bao, B.H. Parking space detection based on ultrasonic and loop detector. Inf. Technol. 2016, 54.
  5. Zhao, Y.; Zhao, S.P.; Chen, L.Y. Application of Wireless Parking Space Detection System. Tianjin Sci. Technol. 2017, 44, 60.
  6. Zhao, Z.Q.; Chen, Y.R.; Yi, W.D. Design of wireless vehicle detector based on AMR sensor. Electron. Meas. Technol. 2013, 1, 2.
  7. Zhang, Z.; Tao, M.; Yuan, H. A Parking Occupancy Detection Algorithm Based on AMR Sensor. Sens. J. 2014, 15, 1261–1269.
  8. Shi, X.Y.; Xu, B.; Yu, G.L.; Long, W. Research and implementation of an intelligent guidance system for car parks based on infrared detection. Pract. Electron. 2013, 43–44.
  9. Suhr, J.K.; Jung, H.G. Sensor Fusion-based Vacant Parking Slot Detection and Tracking. IEEE Trans. Intell. Transp. Syst. 2013, 15, 21–36.
  10. Jiang, H.B.; Ye, H.; Ma, S.D.; Chen, L. High Precision Identification of Parking Slot in Automated Parking System Based on Multi-Sensor Data Fusion. J. Chongqing Univ. Technol. (Nat. Sci.) 2019, 33, 1.
  11. Tschentscher, M.; Koch, C.; König, M.; Salmen, J.; Schlipsing, M. Scalable Real-time Parking Lot Classification: An Evaluation of Image Features and Supervised Learning Algorithms. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; p. 1.
  12. Xu, J.; Chen, G.; Xie, M. Vision-Guided Automatic Parking for Smart Car. In Proceedings of the Intelligent Vehicles Symposium, Dearborn, MI, USA, 3–5 October 2000; p. 725.
  13. Jung, H.G.; Kim, D.S.; Yoon, P.J.; Kim, J. Parking Slot Markings Recognition for Automatic Parking Assist System. In Proceedings of the Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; p. 106.
  14. Wan, T.T. Survey on Video-based Parking Cell Detection Methods. In Proceedings of the Seventh National Conference on Information Acquisition and Processing, Guilin, China, 6 August 2009.
  15. Wan, T.T.; Jiang, D.L. Parking Cell Detection Method Based on KL and Kernel Fisher Discriminant. Comput. Eng. 2011, 37, 204.
  16. Meng, Y.; Sun, J.; Tang, Y.P. Research on Parking State Detection Method Based on Machine Vision. Comput. Meas. Control 2012, 20, 638.
  17. Seo, M.G.; Ohm, S.Y. An Automatic Parking Space Identification System using Deep Learning Techniques. J. Converg. Cult. Technol. 2021, 7, 635–640.
  18. Xu, L.; Chen, X.; Ban, Y.; Huang, D. Method for Intelligent Detection of Parking Spaces Based on Deep Learning. Chin. J. Lasers 2019, 46, 0404013.
  19. Ma, S.; Fang, W.; Jiang, H.; Han, M.; Li, C. Parking space recognition method based on parking space feature construction in the scene of autonomous valet parking. Appl. Sci. 2021, 11, 2759.
  20. Zhang, J.; Liu, T.; Yin, X.; Wang, X.; Zhang, K.; Xu, J.; Wang, D. An improved parking space recognition algorithm based on panoramic vision. Multimed. Tools Appl. 2021, 80, 18181–18209.
  21. Li, Y.; Yang, W.; Zhang, X.; Kang, X.; Li, M. Research on Automatic Driving Trajectory Planning and Tracking Control Based on Improvement of the Artificial Potential Field Method. Sustainability 2022, 14, 12131.
  22. Huang, C.; Yang, S.; Luo, Y.; Wang, Y.; Liu, Z. Visual Detection and Image Processing of Parking Space Based on Deep Learning. Sensors 2022, 22, 6672.
  23. Fintzel, K.; Bendahan, R.; Vestri, C.; Bougnoux, S.; Kakinami, T. 3D Parking Assistant System. In Proceedings of the Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; p. 881.
  24. Vestri, C.; Bougnoux, S.; Bendahan, R.; Fintzel, K.; Wybo, S.; Abad, F.; Kakinami, T. Evaluation of a Vision-Based Parking Assistance System. In Proceedings of the Intelligent Transportation Systems, Vienna, Austria, 16 September 2005; p. 131.
  25. Li, Y.; Zhang, M.; Ding, Y.; Zhou, Z.; Xu, L. Real-Time Travel Time Prediction Based on Evolving Fuzzy Participatory Learning Model. J. Adv. Transp. 2022, 2022, 2578480.
  26. Wang, L.; Jiang, D. A Method of Parking Space Detection Based on Image Segmentation and LBP. In Proceedings of the International Conference on Multimedia Information Network and Security, Nanjing, China, 2–4 November 2012; p. 229.
  27. Almeida, P.; Oliveira, L.S.; Silva, E.; Britto, A.; Koerich, A. Parking Space Detection using Textural Descriptors. In Proceedings of the Systems, Man, and Cybernetics (SMC), Manchester, UK, 13–16 October 2013; p. 3603.
  28. Zhen, Y.L.; Chen, W.B. Study on Image Windows Median Filter Algorithm. Netw. New Media Technol. 2011, 32, 9.
  29. Yang, J.; Deng, R.F.; Wang, X.P. Illumination Preprocessing Algorithm of Face Based on Image Guided Filtering. Comput. Eng. 2014, 40, 182.
  30. Jiang, B.J.; Zhong, M.X. Improved histogram equalization algorithm in the image enhancement. Laser Infrared 2014, 44, 702.
Figure 1. Comparison of histogram equalization processing: (a) original grayscale image; (b) original grayscale histogram; (c) equalized image; (d) grayscale histogram after processing.
Figure 2. Flow chart of the background subtraction method.
Figure 3. Principle diagram of the improved background subtraction method.
Figure 4. LBP texture maps of parking spaces in different states.
Figure 5. Principle diagram of the resistance of pixel LBP values to light changes.
Figure 6. Comparison of the occupancy of parking spaces by pedestrians and vehicles.
Figure 7. Flowchart of the pedestrian interference elimination method.
Figure 8. Flowchart of the state recognition algorithm.
Figure 9. Hamming distance range of vacant parking spaces under different lighting conditions.
Figure 10. Variation range of the Hamming distance when the parking space is occupied.
Figure 11. Visual interface display.
Figure 12. Example validation background diagram.
Figure 13. Schematic diagram of the drawn and marked parking space areas.
Figure 14. Display of pedestrian interference identification information.
Figure 15. Display of identification information under light conditions.
Figure 16. Display of identification information in rain and snow weather.
Table 1. Parking area coordinate table.

No.   Top Left (X, Y)   Bottom Right (X, Y)
1     (425, 169)        (479, 244)
2     (334, 372)        (399, 513)
3     (402, 372)        (465, 513)
4     (467, 373)        (535, 512)
5     (535, 373)        (605, 513)
6     (485, 170)        (532, 242)
7     (536, 166)        (597, 246)
8     (367, 170)        (424, 245)
9     (598, 165)        (660, 249)
10    (608, 373)        (685, 513)
Table 2. The test results.

Project                  Total Number of Car Spaces   Change of Status   No. of Errors   Accuracy
No direct light          145                          39                 2               98.6%
With direct light        85                           31                 3               96.5%
Rain and snow weather    60                           7                  3               95%