1. Introduction
Automation technology has advanced steadily in the 21st century, profoundly improving the quality of people's lives and work. The automatic driver assistance system in vehicles is a hot topic in the contemporary automobile industry [1,2,3]. These systems offer the compelling advantages of reduced manpower, cost savings, and increased economic benefits. In addition, the development of electric vehicles has brought about substantial reductions in carbon emissions and an increased emphasis on environmental friendliness [4,5,6,7]. Small and medium-sized computers and control boards are indispensable components of the automatic driver assistance system. The miniature computer and the control board receive environmental data for calculation through different sensors, such as cameras, ultrasonic sensors, infrared sensors, and LiDAR [8,9,10,11]. The computed results then serve as navigational instructions for electric self-driving vehicles (ESDVs), ensuring safe and stable operation. If the ESDV detects an obstacle or an equipment failure, it automatically stops and promptly notifies maintenance personnel to remove the road obstacle.
Figure 1 displays a schematic diagram of the components of the automatic driver assistance system for electric vehicles, encompassing the control, sensing, and input/output devices. The system includes sensors (camera, ultrasonic sensor), a personal computer, a miniature computer (Jetson Nano Kit), a control board (ATmega1284P module), input devices (keyboard, joystick, and I/O modules), and output devices (DC motor, servo motor, LED display, and horn). The sensors transmit information to the miniature computer for calculation, and the miniature computer sends instructions to the control board. The control board governs the output devices that keep the ESDV on the correct path. Additionally, the LED display and the horn inform other road users about the prevailing driving conditions through vision and sound, fostering a safer driving environment and reducing the likelihood of vehicular accidents.
In recent years, numerous scholars and technicians have developed automatic driver assistance systems for vehicles. Zhang et al. introduced an image processing technique based on the open-source computer vision library (OpenCV). They collected lane data using a camera, extracted crucial features, and monitored traffic signals and other variables to control ESDVs and ensure their correct and stable operation on roadways [12]. Das et al. proposed camera-based road detection and developed lane-change safety trajectories and acceleration/deceleration control for ESDVs, thus humanizing their road behavior [13]. Saranya et al. used a camera to capture the lane ahead of the vehicle and the surrounding objects, subsequently deploying a convolutional neural network (CNN) to learn the most suitable ESDV driving trajectory in strong light as well as in dark and wet environments, while providing obstacle alerts for enhanced safety [14]. Afor et al. explored the use of OpenCV for image recognition combined with the Firebase database, allowing vehicle computers to issue control commands through image processing and data analysis so that self-driving vehicles run stably and arrive at their destination safely [15]. Chaitra et al. integrated deep learning techniques (such as CNNs and the Hough transform algorithm) with OpenCV and onboard sensors to detect lanes, lane objects, and traffic signals; the results showed that the automatic driver assistance system controls ESDVs stably and safely [16]. Chen introduced cruise control and lane-change control for self-driving vehicles, focusing on PID-based steering, acceleration, and deceleration control, which rapidly converges the control signals to ensure the stability of ESDVs [17]. Kannapiran et al. discussed the combined use of LiDAR, a camera, an ultrasonic sensor, and other sensory inputs to track lanes and lane objects; real-time data transmission to the control terminal through vision and deep learning algorithms enabled precise control commands so that the vehicle reliably and safely reaches its destination [18]. Sai et al. explored the use of CNNs combined with OpenCV and the You Only Look Once (YOLO) algorithm to detect moving objects in the lane, so that self-driving vehicles can clearly grasp environmental conditions and navigate the lane stably and securely [19]. Yogitha et al. introduced CNNs and the sparse structure learning algorithm (SSLA) to analyze the images detected by self-driving vehicle sensors; the proposed algorithm accurately identified blurry, dark, and sharp objects, helping self-driving vehicles perceive the real environment and drive safely to their destination [20]. Nguyen et al. proposed a self-driving vehicle with lane tracking and a GPS navigation system, where the camera identifies the lane and moving objects while the GPS navigation system positions the ESDV and drives it correctly to the destination, thus improving reliability and safety [21].
This study introduces an automatic driver assistance system for ESDVs, characterized by a main camera for lane tracking and a novel image recognition curve-fitting (IRCF) control strategy that keeps the ESDV driving steadily in the lane. Meanwhile, the secondary camera identifies the lane markings, while the ultrasonic sensors detect obstacles ahead of and behind the ESDV to avoid collisions. The combination of the LED display and the horn conveys the real-time operational status to road users through vision and sound (e.g., when an obstacle is present or a lane marking is detected), making the driving environment friendlier and reducing vehicle accidents. The ESDV also features real-time monitoring technology that sends images to the cloud, allowing the researchers to supervise and view the vehicle's driving status, facilitating control parameter adjustment and maintenance and improving the system's stability.
3. Proposed Image Recognition Curve-Fitting Control Technology
An advanced image recognition curve-fitting (IRCF) technology is proposed in this study, as depicted in Figure 9. Figure 9a illustrates the actual lane image captured by the main camera, with an image identification range of 60 cm on the x-axis and 50 cm on the y-axis. Figure 9b displays the two white lines on the road extracted by the miniature computer using HSV technology (as shown in Figure 4); these white lines were transformed into two groups of red frame lines through image processing. Subsequently, Figure 9c demonstrates the transformation of these frame lines into two red lines using the proposed curve-fitting technology. Figure 9d displays the correspondence between the two actual white lines on the road and the fitted red lines. The ESDV continuously converts the two white lines on the road into red lines, stably keeping its driving within the two red lines.
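The HSV extraction step in Figure 9b can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions: white paint appears in HSV as low-saturation, high-value pixels, which is more robust to lighting than an RGB threshold. The threshold values and the tiny synthetic frame below are illustrative, not the authors' settings; the real system processes the main camera's frames on the miniature computer.

```python
import colorsys

def white_mask(rgb_image, s_max=0.15, v_min=0.75):
    """Return a binary mask marking near-white pixels.

    White lane paint has low saturation (s) and high value (v) in HSV.
    The thresholds s_max and v_min are illustrative assumptions.
    """
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(1 if s <= s_max and v >= v_min else 0)
        mask.append(mask_row)
    return mask

# Tiny synthetic frame: grey asphalt with a white stripe in column 1.
frame = [
    [(60, 60, 60), (250, 250, 250), (60, 60, 60)],
    [(55, 58, 57), (245, 248, 246), (62, 59, 61)],
]
print(white_mask(frame))  # column 1 is flagged as lane marking
```

On real frames this mask would then be grouped into the red frame lines of Figure 9b by connected-component analysis or similar image processing.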
Figure 10 further elucidates the transformation of the two white lines on the road into two groups of red frame lines and corresponds directly to Figure 9b. Figure 10a shows the red box line on the left, the five points (P1–P5), and a dashed line representing the curve fit, where the red box was generated through image processing (as shown in Figure 9b). Five points, P1, P2, P3, P4, and P5, were first placed at equal proportions within this interval. Each point was then determined by computing the length D via image processing on the miniature computer and taking its intermediate value. Lastly, the curve-fitting technique yielded the plotted blue dashed line. Figure 10b shows the red box line on the right, the five points (P1–P5), and the dashed line fitted to the curve, where the red box was likewise generated through image processing. After curve fitting, the ESDV adhered to the range defined between the two lines, simplifying and optimizing the computational load on the miniature computer.
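The five-point sampling and curve fitting described above can be sketched in pure Python. The helper names and the synthetic boundary below are illustrative assumptions; the paper's system derives the frame edges from the camera image, and the quadratic least-squares solver here is a generic stand-in for whatever fitting routine runs on the miniature computer.

```python
def five_points(box_edges, y_max=50.0):
    """Place P1..P5 at equal proportions over the detection range and take
    the midpoint of the frame width D at each height (as in Figure 10).
    box_edges: y -> (x_left, x_right), a stand-in for the image output."""
    pts = []
    for k in range(5):
        y = y_max * k / 4
        x_left, x_right = box_edges(y)
        pts.append(((x_left + x_right) / 2, y))
    return pts

def quad_fit(points):
    """Least-squares fit y = a*x**2 + b*x + c through the sampled points,
    solved via the 3x3 normal equations with Cramer's rule."""
    S = [sum(x**k for x, _ in points) for k in range(5)]
    T = [sum(y * x**k for x, y in points) for k in range(3)]
    M = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    rhs = [T[2], T[1], T[0]]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(M)
    coeffs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for j in range(3):
            Mi[j][i] = rhs[j]
        coeffs.append(det3(Mi) / d)
    return tuple(coeffs)  # (a, b, c)

# Example: a boundary near x = 60 cm drifting left by 0.1 cm per cm of height.
pts = five_points(lambda y: (58 - 0.1 * y, 62 - 0.1 * y))
print(pts[0])  # first sampled midpoint P1: (60.0, 0.0)
```

Fitting only five midpoints per boundary keeps the per-frame computation cheap, which is the load reduction the text attributes to this step.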
Figure 11 provides an overview of the actual road configuration utilized in this research for ESDV navigation. This circular road has a maximum length and width of 2.4 m, and the lane width is 60 cm. In this context, the ESDV operates counterclockwise, with the main camera covering a detection width of 60 cm and a forward detection distance of 50 cm. The proposed IRCF control strategy first divided the left white line into five points (P1, P2, P3, P4, and P5) and then performed curve fitting (as shown in Figure 12a). Subsequently, the IRCF control strategy was applied to the right white line, following the same procedure of segmenting it into five points (P1, P2, P3, P4, and P5) and performing curve fitting (as depicted in Figure 12b). In addition, the ESDV in this study operated in a circular lane, and the researchers placed the ESDV in the lane at random. If the ESDV runs counterclockwise, the left white line's zero point is at 60 cm on the x-axis and 0 cm on the y-axis (as in Figure 12a), while the right white line's zero point is at 120 cm on the x-axis and 0 cm on the y-axis (as in Figure 12b). The IRCF control strategy starts from the zero point, and the maximum measurable length on the y-axis is 50 cm (as in Figure 9a). The ESDV detected the white lines on both sides of the lane through the proposed IRCF control strategy (as shown in Figure 9) and then performed curve-fitting estimation to keep itself in the lane, so the ESDV continuously ran while recalculating the positions of the five points (P1, P2, P3, P4, and P5).
Figure 12 shows a schematic diagram illustrating the curve fitting of the two white lines of the ESDV's road. The ESDV in this study followed a circular road and was first analyzed in the counterclockwise running state. Figure 12a shows the curve fitting of the left white line. The corresponding equation is as follows:

y = ax² + bx + c, (1)

where the parameters are a = −0.066 cm⁻¹, b = 4.48, and c = −25.6 cm. Figure 12b shows the curve fitting of the right white line. The corresponding equation is as follows:

y = dx² + ex + f, (2)

where the parameters are d = −0.417 cm⁻¹, e = 91.37, and f = −4951 cm.
Figure 12a,b correspond to Figure 11. The unit of both the x- and y-axes is cm. The two white lines were analyzed by image processing technology and converted into five points (P1, P2, P3, P4, and P5), from which Equations (1) and (2) were derived by curve fitting.
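As a quick consistency check (a derived calculation, not a figure reported in the study), the published coefficients can be evaluated at the stated zero points, assuming the fitted curves take the quadratic forms y = ax² + bx + c (Equation (1)) and y = dx² + ex + f (Equation (2)). With the units given (a and d in cm⁻¹, c and f in cm), both curves pass within roughly 10 cm of y = 0 at their zero points, consistent with the rounded coefficients.

```python
def eval_quad(coeffs, x):
    """Evaluate y = a*x**2 + b*x + c at position x (all units in cm)."""
    a, b, c = coeffs
    return a * x**2 + b * x + c

left = (-0.066, 4.48, -25.6)    # Equation (1), left white line
right = (-0.417, 91.37, -4951)  # Equation (2), right white line

print(eval_quad(left, 60))    # near zero at the left line's zero point (x = 60 cm)
print(eval_quad(right, 120))  # near zero at the right line's zero point (x = 120 cm)
```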
Similarly, the ESDV was analyzed in the clockwise running state, where the curve-fitting equation of the left white line is given by Equation (2) and that of the right white line by Equation (1). In summary, the curve fitting in this study was carried out through road analysis, allowing the ESDV to drive on the road accurately and stably. Furthermore, this study is supported by cloud-based web image monitoring technology, which allowed the researchers to adjust the parameters toward optimal control based on real vehicle operating conditions. The IRCF technology can be extended to different road types. The advantages of the control technology are its simple structure, low cost, easy implementation, and high stability.
This study took the circular road as an example to demonstrate the stable operation of the ESDV under the proposed IRCF control strategy. The strategy analyzes the two white lines of the road, takes five points on each, and performs curve fitting to keep the ESDV in the lane. If the driving lane path is non-linear, the proposed IRCF control strategy continuously re-analyzes the two white lines on the road and keeps the ESDV in the lane. The advantage of the IRCF control strategy is that it analyzes the actual white markings on the road, so it can promptly find the lane boundary and remains flexible.
Figure 13 displays the flowchart of the proposed ESDV control technology. The initiation sequence commences with the activation of the two cameras, the two ultrasonic sensors, the horn, and the LED display. The ESDV then determines whether there are obstacles ahead or behind; if so, the ESDV stops, and the horn's siren and the LED display announce the presence of the obstacle. When there are no obstacles ahead of or behind the vehicle, the main camera detects the two white lines on the road, while the secondary camera detects the road signs. Lastly, the proposed IRCF strategy performs the calculations and controls the ESDV's driving, the LED display shows the detected signs, and the miniature computer links to the cloud web monitoring system to present real-time images on the web page.
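One pass of this flowchart can be sketched as follows. This is a minimal illustration under stated assumptions: the function and field names (control_cycle, steer_between, the returned dictionary) are hypothetical stand-ins, and the proportional steering rule is a simplification standing in for the IRCF-based control, not the authors' implementation.

```python
def steer_between(left_x, right_x, vehicle_x, gain=0.05):
    """Steer proportionally toward the midpoint of the two fitted
    lane boundaries (a simplified stand-in for the IRCF control law)."""
    centre = (left_x + right_x) / 2
    return gain * (centre - vehicle_x)

def control_cycle(front_clear, rear_clear, lane, sign, vehicle_x):
    """One pass of the Figure 13 flow: stop and warn on obstacles,
    otherwise drive between the lane boundaries and show the sign."""
    if not (front_clear and rear_clear):
        # Obstacle ahead or behind: stop, sound the horn, warn on the LED.
        return {"motion": "stop", "horn": True, "led": "OBSTACLE"}
    left_x, right_x = lane
    return {"motion": "drive",
            "steer": steer_between(left_x, right_x, vehicle_x),
            "horn": False, "led": sign}

# Obstacle behind: stop and warn via horn and LED display.
print(control_cycle(True, False, (60, 120), "SLOW", 85))
# Clear road: drive, steering toward the lane centre at x = 90 cm.
print(control_cycle(True, True, (60, 120), "SLOW", 85))
```

In the real system the lane tuple would come from the curve fits of Equations (1) and (2) evaluated ahead of the vehicle, and the returned commands would go to the control board rather than a dictionary.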
In this study, the proposed IRCF control strategy analyzed the left and right white lines, dividing each into five points, and the road boundary was then drawn by curve fitting. If the white line is blurred over a short local section, the road boundary drawn by curve fitting deviates, but the ESDV continues to run. As soon as this short blurred section is passed, the IRCF control strategy resumes analyzing the white line boundary, and the ESDV quickly corrects itself and drives on the planned path. If the left and right white lines are obscured over a large area, the proposed IRCF control strategy cannot operate normally; in that case, the ESDV stops when it encounters an obstacle. Furthermore, the ESDV's cloud monitoring technology helps the researchers observe its operating status and perform troubleshooting.
4. Experimental Results
Figure 14 demonstrates the overall view of the ESDV operation. The recognized images were transmitted to the cloud web page during runs on the 2.4 m × 2.4 m ring test road, allowing the researchers to effectively examine the actual ESDV operation. Figure 14a shows the dynamic test of the ESDV's clockwise operation, and Figure 14b shows the dynamic test of its counterclockwise operation. The distance covered by the ESDV in one circuit of the ring road was 5.5 m. The characteristics and efficiency of the ESDV were compared between Hough line detection and the proposed IRCF technology over 20 laps, for a total of 110 m.
The Hough line detection method [15] uses image processing to analyze the two white lines on the road; therefore, this study used the same ESDV to compare the performance of the proposed IRCF control strategy against it. When encountering a curved road, the Hough line detection method computes multiple straight lines with different directions; its calculation is complex and the resulting path is not smooth, leading to low ESDV operating performance. In contrast, the proposed IRCF control strategy computes a smooth path on a curved road, letting the ESDV follow it and thus achieving high operating performance.
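For context, the voting idea behind Hough line detection can be sketched in a few lines of pure Python (this shows only the core accumulator, not the implementation of [15]): each edge point votes for every (θ, ρ) line passing through it, so collinear points concentrate their votes in one bin, whereas a curve scatters its votes across many bins and is approximated by many short segments, which is the complexity the comparison above refers to.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, threshold=5):
    """Vote each point into (theta-index, rho) bins; bins reaching the
    threshold correspond to detected straight lines."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return {k for k, v in acc.items() if v >= threshold}

# Five collinear points on the vertical line x = 2 all vote for the
# bin theta = 0, rho = 2, so that line is detected.
lines = hough_lines([(2, 0), (2, 1), (2, 2), (2, 3), (2, 4)])
print((0, 2) in lines)  # True
```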
The comparison of the characteristics and effectiveness of the two control strategies is presented in Table 2. The proposed IRCF technique outperformed the Hough line detection method in running speed over 110 m in both the clockwise and counterclockwise directions. The ESDV was run using both methods; the run time of the proposed IRCF control strategy was shorter than that of the Hough line detection method [15], so its efficiency was higher. The IRCF control strategy's running dexterity is also good, because the curve fitting makes the lane boundary smoother, allowing the ESDV to operate with greater flexibility at a higher speed. The proposed control technique is therefore superior in terms of efficiency and operating flexibility.
Figure 15 displays a physical view of the dynamic counterclockwise ESDV operation and the real-time images displayed on the cloud web page. The experiment confirmed the feasibility of the proposed control system. The real-time images were transmitted to the cloud web page so that maintenance, research, and development personnel could observe the operation as it happened, facilitating timely control parameter adjustment and maintenance.
5. Discussion and Conclusions
The integration of image processing technology and the proposed IRCF control strategy within the ESDV self-driving assistance system significantly enhanced its capability to operate stably, efficiently, and dexterously within the lane. The proposed strategy was compared with the Hough line detection method on a 110 m loop: the speeds of the proposed control strategy and the Hough line detection method were 2.13 km/h and 1.97 km/h, respectively. The experimental results show that the proposed control strategy outperformed the traditional Hough line detection method in terms of speed, efficiency, and running dexterity. Furthermore, the integration of the ESDV with a cloud-based web page facilitated the real-time display of road conditions as captured by its two cameras. This dynamic feature empowered the researchers to swiftly adapt the control parameters and promptly address vehicle-related issues while the ESDV operated in the lane.
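The reported speeds imply the corresponding run times over the 110 m course (a derived check, not a figure reported in the study): roughly 186 s for the IRCF strategy versus about 201 s for Hough line detection, a saving of about 15 s.

```python
def run_time_s(distance_m, speed_kmh):
    """Time in seconds to cover a distance at a constant speed."""
    return distance_m / (speed_kmh * 1000 / 3600)

ircf = run_time_s(110, 2.13)   # proposed IRCF control strategy
hough = run_time_s(110, 1.97)  # Hough line detection method
print(round(ircf, 1), round(hough, 1), round(hough - ircf, 1))
# → 185.9 201.0 15.1
```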
Moreover, the IRCF control strategy proved adaptable to non-linear driving lane paths. When the ESDV encountered blurred white lines in localized sections, the road boundary drawn by curve fitting deviated, but the ESDV continued its course; as soon as the short blurred section was passed, the IRCF control strategy promptly recalibrated, ensuring the ESDV returned to its designated path. If the left and right white lines are obscured over a large area, the proposed IRCF control strategy cannot operate normally. Furthermore, the ESDV's cloud monitoring technology enabled the researchers to monitor its operating status and troubleshoot any issues. Most conventional auxiliary control systems for self-driving cars use LiDAR for environmental analysis and stability control, incurring substantial cost and computational demands. In contrast, the IRCF control strategy is particularly suited to roads marked with white lane markings, such as those in campuses and industrial areas; it not only curtails design expenditure but also reduces computational complexity.
Finally, the ESDV’s secondary camera has a signal sign detection function that can be shown on the LED display to inform passengers and road users of the current driving environment. In addition, the ESDV uses ultrasound to detect obstacles ahead and behind and provides alarm functions through the horn and LED display. Therefore, the ESDV control mechanism proposed in this study is closer to the practical requirements of the ESDV on the road, promoting the automatic assistance driving function to the next generation.
The IRCF control strategy can be applied to ESDVs in future applications, including operation in rainy weather and humid conditions. The control parameters can be adjusted to let the ESDV operate flexibly and stably under blurred-image conditions and on slippery roads. The proposed method can also be applied in night tests to determine whether the ESDV's camera can still detect road images under headlamp lighting. Furthermore, the proposed control strategy may be extended to both day and night modes, allowing the automatic driving assistance system to be applied more comprehensively to real road environments.