Obstacle Avoidance of Multi-Sensor Intelligent Robot Based on Road Sign Detection

Existing ultrasonic obstacle-avoidance robots use only an ultrasonic sensor during obstacle avoidance, so they can only follow a fixed avoidance route; their avoidance behavior cannot incorporate additional information. At the same time, existing robots rarely address strategies for avoiding pits. In this study, visual information is added on top of ultrasonic obstacle avoidance so that the robot can refer to the direction indicated by road signs while avoiding obstacles, and an infrared ranging sensor is added so that the robot can also avoid potholes. To this end, this paper proposes an intelligent obstacle-avoidance design for an autonomous mobile robot based on multiple sensors in a multi-obstruction environment. A CascadeClassifier is trained on positive and negative samples of road signs with similar color and shape. Multi-sensor information fusion is used for path planning, and the obstacle-avoidance logic of the intelligent robot is designed to realize autonomous obstacle avoidance. The infrared sensor acquires information about ground depressions in the wheel path, the ultrasonic sensors acquire distance information about surrounding obstacles and road signs, and the road-sign images captured by the camera are processed by the computer and transmitted to the main controller. The environment information thus obtained is processed by the microprocessor, which outputs control commands to the execution unit. The feasibility of the design is verified by analyzing the distances acquired by the ultrasonic and infrared ranging sensors and the model obtained by training on the road-sign samples, as well as by experiments in a manually constructed complex environment.


Introduction
With the development of artificial intelligence technology, mobile robots are widely used in intelligent factories, modern logistics, security, precision agriculture and other fields [1][2][3][4]. Wheeled mobile robots in particular have been widely used in storage and transportation. The focus of this research is to avoid obstacles in complex environments while keeping the path optimal. The key to realizing autonomous motion control of a mobile robot is to obtain information about the surrounding environment and transfer it to the main controller, which converts it into control commands, so that the robot can safely and stably avoid all obstacles while moving to its destination; this requires the mobile robot to have a strong perception system. Different types of sensors are required for different information, including proprioceptive sensors that measure quantities such as joint angle and wheel speed, and exteroceptive sensors that sense external data such as sound, light and distance [5][6][7][8][9][10]. The sensing technologies of mobile robots include passive sensing based on multiple cameras, stereo vision and infrared cameras, and active sensing using lidar and sonar to detect dynamic or stationary obstacles in real time [11]. Laser ranging has been used to analyze wheel skid of four-wheel skid-steering mobile robots, and other studies have proposed vision-based target tracking for wheeled mobile robots [12,13].
For an unknown environment, sensors are usually used for intelligent obstacle avoidance and path planning. An early method of obstacle avoidance and path planning was to navigate by detecting stickers on the ground with infrared rays, which can only be used in a known environment [14]. Jiang et al. [15] utilized six ultrasonic sensors to capture information about the environment around a wheeled robot and to identify a parking space for automatic parking. In 1995, Yuta and Ando [16] installed ultrasonic sensors on the front of a robot and at various locations on its left and right sides. In Refs. [17,18], multiple ultrasonic readings were used to create a map of the surrounding environment or to establish the surface shape of obstacles.
At present, research on obstacle-avoidance robots mostly concerns the motor driving principle, the motor speed regulation scheme and the ranging principle, and research on obstacle avoidance itself addresses only raised obstacles; few studies consider mobile robots that encounter pits during automatic travel. In this paper, ultrasonic sensor information, infrared distance measuring sensor information and camera information are fused. Beyond solving the above problems, a road-sign recognition function is also introduced, which allows the mobile robot to move accurately based on traffic-sign information.

Establishing Kinematic Model
The object of this paper is a wheeled skid-steering mobile robot, driven independently by four symmetrical wheels, whose motion involves both rolling and sliding; its mechanical structure is simple and highly flexible. The car body has no steering gear and relies on changing the left and right wheel speeds to make the wheels skid, achieving steering of different radii and even zero-radius steering. However, because of the nonlinear, time-varying, multi-variable and strongly coupled characteristics of the system, its motion is more uncertain than that of a vehicle with a steering device, so it is necessary to build a dedicated kinematic model for the system rather than representing its motion with a simple one. Figure 1 shows the operational model of the robot studied here. Without considering mass transfer, the following assumptions are made for the model:
1. The vehicle body is completely symmetrical and the center of mass coincides with the geometric center;
2. The car moves in plane motion.
We established the fuselage coordinate system Ob and the ground coordinate system XOY, where the fuselage coordinate system moves with the vehicle. In the process of moving, relative sliding between the mobile robot and the ground is inevitable. We used the slip rate to describe wheel slip; its calculation formula is

s_i = (w_i·r − v_ix) / (w_i·r)

where w_i is the rotational speed of wheel i, r is the wheel radius, and v_ix is the x-direction component of the velocity of the center of wheel i. When s_i > 0, the wheel linear velocity is greater than the wheel-center velocity, the friction between the wheel and the ground acts as the driving force, and slip occurs. When s_i < 0, the wheel linear velocity is less than the wheel-center velocity, the friction acts as the braking force, and skid occurs. When s_i = 0, the linear speed of the wheel equals the speed of the wheel center and the robot is in a pure rolling state.
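The slip-rate definition above can be sketched in code. This is the standard slip-ratio formula s_i = (w_i·r − v_ix)/(w_i·r); the wheel radius r is passed explicitly, and the zero-speed guard is an added assumption:

```python
def slip_ratio(w_i, v_ix, r):
    """Slip rate of wheel i: s_i = (w_i*r - v_ix) / (w_i*r).

    w_i  : rotational speed of wheel i (rad/s)
    v_ix : x-direction velocity of the wheel center (m/s)
    r    : wheel radius (m), an assumed parameter
    """
    wheel_speed = w_i * r  # linear speed of the wheel rim
    if wheel_speed == 0:
        return 0.0
    return (wheel_speed - v_ix) / wheel_speed


def wheel_state(s_i):
    """Interpret the slip rate as in the text."""
    if s_i > 0:
        return "slip"          # friction acts as the driving force
    if s_i < 0:
        return "skid"          # friction acts as the braking force
    return "pure rolling"
```

For example, a wheel rim moving at 1.0 m/s while its center moves at 0.9 m/s gives s_i = 0.1, a driving-slip condition.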
We defined the side near the center of rotation as the inner side and the other side as the outer side. When the robot turns, the inner wheels skid and the outer wheels slip. The y coordinate of the instantaneous center of the contact points between the wheels on either side and the ground equals the y coordinate of the rotation center. When the robot rotates, the longitudinal velocities at the centers of the wheels on the same side are equal, and because the wheels on the same side rotate at the same speed, formula (4) gives the relationship between linear velocity, attitude angular velocity and the left and right wheel rotation speeds: v = (w_l·r + w_r·r)/2.
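Formula (4) can be sketched as follows. The translational relation v = (w_l·r + w_r·r)/2 is taken from the text; the yaw-rate expression w = r·(w_r − w_l)/B is the standard skid-steer relation under an assumed track width B, not a formula reproduced from the extracted text:

```python
def body_velocity(w_l, w_r, r, B):
    """Skid-steer body kinematics (sketch).

    v = (w_l*r + w_r*r) / 2   -- formula (4) in the text
    w = r*(w_r - w_l) / B     -- standard yaw-rate relation (assumed); B = track width
    """
    v = r * (w_l + w_r) / 2.0
    w = r * (w_r - w_l) / B
    return v, w
```

Equal wheel speeds give pure translation; equal and opposite speeds give the zero-radius turn described earlier.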
Since there is sliding, the linear velocity is represented by the longitudinal velocity of the wheel center, while the lateral velocity can be represented by the sideslip angle, so formula (5) can be obtained.
Through coordinate system transformation, the kinematics equation of the mobile robot in the ground coordinate system XOY can be expressed by formula (6).
The meanings of parameters in the formula are shown in Table 1.
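If the sideslip angle is neglected, the ground-frame kinematics implied by the coordinate transformation reduces to the familiar unicycle update below; this is a sketch of the transformation, not formula (6) verbatim:

```python
import math


def integrate_pose(x, y, theta, v, w, dt):
    """One Euler step of the ground-frame kinematics (sideslip neglected):
    x_dot = v*cos(theta), y_dot = v*sin(theta), theta_dot = w."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Driving straight along the X axis for 0.5 s at 1 m/s advances x by 0.5 m and leaves y and theta unchanged.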

System Structure of the Mobile Robot
The control system of the mobile robot is composed of the power module, main controller, industrial personal computer (IPC), detection module and drive module. The power module supplies energy to the whole system; since each module requires a different voltage, voltage conversion and regulation are performed by a buck/boost module. The ultrasonic sensor and the infrared distance measuring sensors are powered directly by the controller. The detection module comprises ultrasonic sensors, infrared distance measuring sensors and a camera; it detects the surrounding environment and feeds the detected information back to the main controller. The camera data are first handed to the IPC and then fed back to the main controller after processing. The drive module consists of a motor driver and encoder motors: the driver controls the motor speed, and the encoder measures the speed and feeds it back to the driver for closed-loop control. The main controller, an Arduino Mega 2560, is responsible for the information processing of the system. The detection system feeds environment information back to the main controller, which processes it, and the drive system then controls the movement speed and attitude of the robot according to this information, so that the robot avoids obstacles within its range of activity. Figure 2 shows the system composition of the mobile robot.
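The encoder feedback loop mentioned above can be sketched as a simple proportional speed controller. The gain kp and the RPM units are assumptions for illustration, not values from the paper:

```python
def speed_correction(target_rpm, measured_rpm, kp=0.8):
    """Proportional term of the driver's closed-loop speed control (sketch).

    The encoder supplies measured_rpm; the returned correction would be
    added to the driver's output. kp is an assumed gain.
    """
    return kp * (target_rpm - measured_rpm)
```

In the real system the driver likely applies a full PID loop; the proportional term alone shows the feedback direction: a motor running slow receives a positive correction.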

Detection System
In research on mobile-robot obstacle avoidance, processing the surrounding environment information is especially important. In real life the environment is dynamic and unknown, and in some environments there are signs that require the robot to move in a specified direction, so it is important that the robot moves safely to its destination in a complex location. By selecting appropriate sensors to collect and analyze environmental information, the robot can realize these functions. In this design, the HC-SR04 ultrasonic sensor, the GP2YA02 infrared distance measuring sensor and a driver-free USB camera are selected as the components of the detection system.

Sensor Layout
In order to make the robot work normally in both static and dynamic environments, nine ultrasonic sensors, two infrared distance measuring sensors and a driver-free USB camera were installed on the robot body. The ultrasonic sensors detect raised obstacles in the surroundings; the infrared distance measuring sensors, positioned between the two front wheels at the bottom of the chassis, detect ground pits; and the camera detects road-sign information. The information detected by the sensors is transmitted to the main controller for processing, and commands are sent to the motor driver to make the robot move accordingly. The sensor layout of the mobile robot is shown in Figure 3.

Sample Pretreatment
The training samples are divided into positive and negative samples: a positive sample is a road-sign picture, and a negative sample is any other picture. In this paper, 1000 positive samples and 2000 negative samples were selected, grayed and normalized to 128 × 72 grayscale images to form the training set, so that pictures of different sizes do not yield different numbers of features. Figure 4 shows samples of the three road signs trained on separately; Figure 5 shows negative samples.
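The graying and 128 × 72 normalization step can be sketched in pure Python (in practice this would typically be done with an image library such as OpenCV). The luminance weights and nearest-neighbour resampling are standard choices, assumed rather than taken from the paper:

```python
def to_gray(rgb):
    """Convert an image (rows of (r, g, b) tuples) to grayscale using the
    standard ITU-R BT.601 luminance weights (an assumed choice)."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b)) for (r, g, b) in row]
            for row in rgb]


def resize_nn(img, w=128, h=72):
    """Nearest-neighbour resize to the w x h training resolution, so every
    sample yields the same number of features."""
    H, W = len(img), len(img[0])
    return [[img[y * H // h][x * W // w] for x in range(w)] for y in range(h)]
```

Applying `resize_nn(to_gray(picture))` to every positive and negative picture produces a uniformly sized grayscale training set.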


CascadeClassifier Training Based on Adaboost
The Adaboost algorithm is an adaptive boosting algorithm. Its basic idea is to build a strong classifier from weak classifiers trained on sample distributions with different weights [19][20][21][22]. The algorithm synthesizes a strong CascadeClassifier with strong classification ability by superposing a large number of simple classifiers with ordinary classification ability, at each stage selecting the weak classifier with the best resolution performance and the least error. The principle is to carry out T iterations, select an optimal weak classifier each time, then update the sample weights, reducing the weight of correctly classified samples and increasing the weight of incorrectly classified samples. The specific algorithm is as follows [23][24][25]:
Step 1: given a data set for training, {(x_1, y_1), ..., (x_n, y_n)}, where x_i is an input training sample image and y_i ∈ {0, 1} is the classification label, 1 denoting a positive sample and 0 a negative sample;
Step 2: specify the number of loop iterations T;
Step 3: initialize the sample weights with the initial distribution probability d(i) = 1/n;
Step 4: for t = 1, 2, ..., T (T is the number of training rounds and determines the number of final weak CascadeClassifiers):
(1) normalize the weights;
(2) train a weak CascadeClassifier on the weighted samples with the learning algorithm;
(3) find the error rate of each candidate under the current weights, and add the weak CascadeClassifier with the smallest error rate to the strong CascadeClassifier;
(4) update the weights: if sample i is classified correctly, decrease its weight; otherwise, increase it;
(5) after T rounds, the strong CascadeClassifier obtained is the weighted combination of the selected weak classifiers.
Figure 6 shows the identification flow chart. The whole process is divided into two steps: training and identification. In the training process, Haar features are extracted from a large number of road-sign samples, and the Adaboost algorithm selects the effective features to form the CascadeClassifier. In the recognition process, the key features of the sample to be identified are extracted first, and then these features and the trained CascadeClassifier are used for road-sign recognition.
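One round of the weight update described in Step 4 can be sketched as follows. The exp/log form of the update is the common AdaBoost formulation and may differ in notation from the paper's exact formulas, but it has the property stated in the text: correct samples are down-weighted and incorrect samples up-weighted:

```python
import math


def adaboost_round(weights, predictions, labels):
    """One boosting round: weighted error of a weak classifier, its vote
    alpha, and the renormalized sample weights (sketch)."""
    # weighted error rate of this weak classifier
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1.0 - eps) / eps)   # classifier vote
    # correct samples shrink by e^-alpha, incorrect ones grow by e^+alpha
    new_w = [w * math.exp(-alpha if p == y else alpha)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)                              # renormalization constant
    return alpha, [w / z for w in new_w]
```

With four equally weighted samples and one mistake, the misclassified sample's weight rises from 0.25 to 0.5, so the next weak classifier concentrates on it.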

Obstacle Avoidance Strategy for Mobile Robots
In this study, obstacle avoidance is realized in two main situations: avoidance of raised obstacles on the ground and avoidance of sunken obstacles (pits). Figure 7 shows the schematic diagram of the obstacles and obstacle-avoidance routes in this study. We made the following assumptions about the obstacles:

1. Raised obstacles and ground pits only occur under the conditions shown in Figure 8;
2. Raised obstacles and ground pits do not appear simultaneously;
3. Road signs are present on raised obstacles;
4. The width of a ground pit is less than the wheel spacing of the robot;
5. A stop road sign at the destination prompts the robot to stop.

The overall obstacle-avoidance logic is as follows: if the ultrasonic sensor detects an obstacle, the robot stops and starts the camera to judge whether there is a road-sign indication; if there is, the robot avoids the obstacle according to the road sign; if there is no road sign, it avoids the obstacle autonomously according to the built-in program; if the infrared distance measuring sensors detect a ground pit, the built-in program performs the corresponding pit-avoidance movement.
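The fused decision logic can be sketched as a priority function over the three sensor types; the 60 cm and 8 cm thresholds are those used in the experiments, and the action names are illustrative:

```python
def avoidance_decision(front_cm, ir_left_cm, ir_right_cm, sign):
    """Fuse the three sensor types with the priorities described above.

    front_cm               : front ultrasonic distance (cm)
    ir_left_cm/ir_right_cm : infrared ground distances (>= 8 cm means a pit)
    sign                   : 'left', 'right', 'stop' or None (no sign detected)
    """
    PIT_CM, OBSTACLE_CM = 8, 60       # thresholds used in the experiments
    if ir_left_cm >= PIT_CM and ir_right_cm >= PIT_CM:
        return "stop_wait_manual"     # pits on both sides: wait for manual help
    if ir_left_cm >= PIT_CM:
        return "veer_left"            # pit under the left wheel track
    if ir_right_cm >= PIT_CM:
        return "veer_right"
    if front_cm <= OBSTACLE_CM:
        if sign in ("left", "right"):
            return "turn_" + sign     # follow the road sign
        if sign == "stop":
            return "stop"             # destination reached
        return "autonomous_avoid"     # built-in avoidance program
    return "forward"
```

The pit checks come first because a pit threatens the wheels directly, while a raised obstacle at 60 cm still leaves turning room.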
When the robot avoids obstacles, it needs to leave a certain space so that it can turn safely. Since the length and width of the robot are both 70 cm, the distance between its rotation center and its furthest point is about 50 cm; the safe distance is set to 60 cm because the robot deviates when rotating. When the distance measured by the front sensor equals 60 cm, there is an obstacle ahead. At this point the camera is opened to detect whether there is a road sign. If a road sign is detected, the obstacle is avoided in the direction indicated by the sign; if not, the camera is turned off and autonomous obstacle avoidance is performed. The camera is used only once an obstacle has been detected because road signs are fixed to the surfaces of obstacles on the ground and, with no ranging method other than the ultrasonic sensors, the camera cannot determine the distance of a road sign on its own. Therefore, the ultrasonic sensor first detects the obstacle and determines its distance, and the camera is then turned on to determine whether there is a road sign on the obstacle, the ultrasonic measurement providing the distance to the sign.
The avoidance movement for raised obstacles is as follows: when the distance to an obstacle detected by the front ultrasonic sensor is 60 cm, the left and right ultrasonic sensors start to detect obstacles. If the distance measured on the left is greater than that measured on the right, the robot turns left: the left and right wheels rotate in opposite directions, with the left wheels reversing. If the distance measured on the right is greater than that on the left, the robot turns right: the wheels again rotate in opposite directions, with the left wheels turning forward. After turning successfully, the robot drives forward until it is out of the range of the obstacle, then turns in the direction opposite to the previous turn, continues driving and finally leaves the obstacle behind. The robot is judged to have left the obstacle range when the reading of the side ultrasonic sensor exceeds 60 cm.
The obstacle avoidance movement of the ground pits is as follows: the distance between the infrared sensor and the ground is 6 cm, and the distance between the chassis and the ground is 3 cm, so the safe distance is less than 9 cm, which is set to 8 cm in this experiment. So, the distance detected by the infrared ranging sensor is greater than 8 cm to avoid the past. When the detection distance of the left infrared sensor is greater than or equal to 8 cm, the left wheel slows down, and the right wheel accelerates to the left to avoid the pit. When the detection distance of the right infrared sensor is greater than or equal to 8 cm, the right wheel slows down, and the right wheel accelerates to the right to avoid the pit. When pits are detected on both sides, the vehicle stops and waits for manual movement. Figure 9 is the physical prototype used in the experiment, and Table 2 is the mark in the physical prototype diagram. The prototype includes a vehicle body, drive assembly, detection assembly and control assembly. The drive assembly includes a drive member for driving a wheel to rotate relative to a vehicle body and a wheel. A plurality of detection assemblies is connected to the vehicle body, a portion of the detection assemblies (ultrasonic sensors) are used to detect obstacles around the vehicle body, a portion of the detection assemblies (infrared sensors) are used to detect obstacles at the lower end of the vehicle body, and a portion of the detection assemblies (cameras) are used to detect road sign information. The control assembly is connected to a drive member and a detection assembly to receive a signal from the detection assembly and control the drive member to drive a wheel to turn when the detection assembly detects an obstacle to avoid the obstacle. detection assembly and control assembly. The drive assembly includes a drive member for driving a wheel to rotate relative to a vehicle body and a wheel. 
A plurality of detection assemblies is connected to the vehicle body, a portion of the detection assemblies (ultrasonic sensors) are used to detect obstacles around the vehicle body, a portion of the detection assemblies (infrared sensors) are used to detect obstacles at the lower end of the vehicle body, and a portion of the detection assemblies (cameras) are used to detect road sign information. The control assembly is connected to a drive member and a detection assembly to receive a signal from the detection assembly and control the drive member to drive a wheel to turn when the detection assembly detects an obstacle to avoid the obstacle.   Figure 10 is an ultrasonic sensor. The HC-SR04 ultrasonic ranging module can provide 2 cm-450 cm non-contact ranging function, ranging accuracy up to 3 mm; the module includes an ultrasonic transmitter, receiver and control circuit.   Figure 10 is an ultrasonic sensor. The HC-SR04 ultrasonic ranging module can provide 2 cm-450 cm non-contact ranging function, ranging accuracy up to 3 mm; the module includes an ultrasonic transmitter, receiver and control circuit. The ultrasonic module has four pins:, Trig (control end), Echo (receiving end), GND; VCC and GND are connected to 5 V power supply, Trig (control end) controls the ultrasonic signal sent, and Echo (receiving end) receives the reflected ultrasonic signal. Figure 11 is the principle of ultrasonic sensor ranging. The ultrasonic sensor ranging is based on the reflection characteristics of the ultrasonic sensor. The transmitter end of The ultrasonic module has four pins:, Trig (control end), Echo (receiving end), GND; VCC and GND are connected to 5 V power supply, Trig (control end) controls the ultrasonic signal sent, and Echo (receiving end) receives the reflected ultrasonic signal. Figure 11 is the principle of ultrasonic sensor ranging. The ultrasonic sensor ranging is based on the reflection characteristics of the ultrasonic sensor. 
The transmitter end of the ultrasonic sensor emits a beam of ultrasonic wave, and at the same time it starts timing, and the ultrasonic wave is transmitted in the medium at the same time. Because sound waves have reflective properties, they bounce back when they encounter obstacles. When the receiving end of the ultrasonic sensor receives the reflected ultrasonic wave back, it stops the timing. The propagation medium in this study is air, and the propagation speed of sound in air is 340 m/s. According to the recorded time t, the distance S between the launching position and the obstacle can be calculated according to the formula S = 340 t/2. The ultrasonic module has four pins:, Trig (control end), Echo (receiving end), GND; VCC and GND are connected to 5 V power supply, Trig (control end) controls the ultrasonic signal sent, and Echo (receiving end) receives the reflected ultrasonic signal. Figure 11 is the principle of ultrasonic sensor ranging. The ultrasonic sensor ranging is based on the reflection characteristics of the ultrasonic sensor. The transmitter end of the ultrasonic sensor emits a beam of ultrasonic wave, and at the same time it starts timing, and the ultrasonic wave is transmitted in the medium at the same time. Because sound waves have reflective properties, they bounce back when they encounter obstacles. When the receiving end of the ultrasonic sensor receives the reflected ultrasonic wave back, it stops the timing. The propagation medium in this study is air, and the propagation speed of sound in air is 340 m/s. According to the recorded time t, the distance S between the launching position and the obstacle can be calculated according to the formula S = 340 t/2. Figure 11. Ranging principle. Figure 12 is a sequence diagram of an ultrasonic sensor. As shown in Figure 13, a pulse trigger signal of more than 10 needs to be provided, for 8 cycle levels of 40 kHz to Figure 11. Ranging principle. 
Figure 12 is the timing diagram of the ultrasonic sensor. As shown there, a trigger pulse of at least 10 µs must be applied to Trig; the module then emits eight 40 kHz cycles and listens for the echo. Once an echo is detected, the module outputs an echo signal whose pulse width is proportional to the measured distance, so the distance follows from the interval between transmitting the signal and receiving the echo: distance = high-level time × speed of sound / 2. To prevent the transmitted signal from influencing the next echo, the measurement period is kept above 60 ms.

An ultrasonic sensor was used to measure distances to an object. The test distances and actual distances obtained over multiple experiments are shown in Table 3 (Actual distance and test distance).

Figure 14 shows the result of detecting the actual road sign with the camera and the trained model. Figure 15 shows the actual detection environment; its background is not a pure color, which satisfies the robot's requirements. In Figure 16, from left to right and top to bottom, the test distances are 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm, and 100 cm. Experiments were conducted on road signs at these different distances.
The test results show that when the camera is 20 cm or 25 cm from the road sign, the sign cannot fit entirely in the field of view because of the camera parameters; it is only partially visible. Tables 5 and 6 give the experimental data for road sign recognition. They show that the recognition success rate is lower when the camera is less than 25 cm from the sign and higher when the distance is greater than or equal to 30 cm, averaging 99.625%, which meets the requirement for accurate obstacle avoidance.

To improve the accuracy of the distances measured by the ultrasonic sensor, the distance values in Table 3 were fitted with the least-squares method in MATLAB. The fitted curve is y = ax + b with a = 1.0171 and b = 0.262, as shown in Figure 13, where the vertical axis is the measured value and the horizontal axis is the actual value (both in cm). Table 4 lists the actual values and the values after fitting; the error is very small.
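The least-squares fit above can be reproduced without MATLAB. The sketch below computes the closed-form slope and intercept for y = ax + b and then inverts the fitted line (y = measured, x = actual) to correct a raw reading; the calibration data points used here would be the Table 3 pairs, which are not reproduced:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed-form solution)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def corrected_distance(measured_cm, a=1.0171, b=0.262):
    """Invert the fitted line to estimate the actual distance
    from a raw ultrasonic reading, using the coefficients from the text."""
    return (measured_cm - b) / a
```

Applying `corrected_distance` to each sensor reading yields the small residual errors reported in Table 4.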
6.3. Physical Test
Figure 16 shows the motor driver: an intelligent PID motor drive module with a built-in controller capable of PID computation and trapezoidal control, and an on-board drive circuit that drives the DC motors. Figure 16. Motor driver. Through the serial port, an 8-byte command can be sent to control the forward and reverse rotation of the two motors. Figure 17 shows the motor speed curve after PID tuning: when the motor speed increases suddenly from 0 to the maximum, the overshoot is very small and the curve reaches dynamic equilibrium in a very short time.
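The speed regulation described here runs inside the drive module itself; a generic discrete PID loop of the same form can be sketched as follows (the gains and sample time are illustrative placeholders, not the module's actual parameters):

```python
class PID:
    """Textbook discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulated error term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

Each sample period, the module would feed the commanded and measured wheel speeds through such an update and apply the result to the motor drive voltage, which is what produces the low-overshoot step response seen in Figure 17.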
Figure 18 shows the artificially constructed experimental environment, including obstacles and road signs. The obstacle avoidance experiment with the physical prototype was conducted in this environment; the results, shown in Figure 19, demonstrate that the method can successfully identify road signs and realize autonomous obstacle avoidance in a complex environment. Figure 20 shows a real-time picture of road sign detection (not the experimental environment shown in Figure 19).
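The obstacle avoidance logic exercised in these experiments fuses the three sensor types into one drive command. A simplified decision function is sketched below; the thresholds, command names, and priority order are hypothetical illustrations of the strategy described in the paper, not its exact implementation:

```python
def avoidance_command(front_cm, left_cm, right_cm, pit_ahead,
                      sign_direction=None, stop_cm=20.0):
    """Map fused sensor readings to a drive command (illustrative sketch).

    front_cm/left_cm/right_cm: ultrasonic distances around the body
    pit_ahead: infrared flag for a depression on the wheel path
    sign_direction: 'left' or 'right' when the camera recognizes a road sign
    """
    if pit_ahead:                        # depressions take highest priority
        return "reverse"
    if front_cm > stop_cm and sign_direction is None:
        return "forward"                 # path clear and no sign to obey
    if sign_direction in ("left", "right"):
        return f"turn_{sign_direction}"  # follow the detected road sign
    # no sign available: turn toward the side with more free space
    return "turn_left" if left_cm >= right_cm else "turn_right"
```

The priority order reflects the strategy in the text: pits must be avoided unconditionally, road signs override the default heuristic, and free-space comparison is the fallback.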

Conclusions
In the physical prototype experiment, the mobile robot passed through the narrow gaps between obstacles stably and safely, ran correctly according to the direction indicated by the road signs, and finally reached the given destination. The experimental results verify the feasibility of the design and the accuracy of road sign detection and obstacle avoidance. The multi-sensor information fusion method not only compensates for the error of a single sensor but also senses multi-directional, multi-type obstacle information in real time and realizes the obstacle avoidance function. It can therefore be widely applied in mobile robot systems.