Article

Obstacle Avoidance of Multi-Sensor Intelligent Robot Based on Road Sign Detection

Jianwei Zhao, Jianhua Fang, Shouzhong Wang, Kun Wang, Chengxiang Liu and Tao Han
1 School of Mechatronic Engineering, China University of Mining and Technology, Beijing 100089, China
2 Beijing Special Engineering and Design Institute, Beijing 100028, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2021, 21(20), 6777; https://doi.org/10.3390/s21206777
Submission received: 6 September 2021 / Revised: 29 September 2021 / Accepted: 8 October 2021 / Published: 12 October 2021

Abstract

Existing ultrasonic obstacle avoidance robots rely on an ultrasonic sensor alone, so they can only follow a fixed obstacle avoidance route and cannot take additional information into account. Moreover, existing robots rarely address the avoidance of pits. In this study, visual information is added on top of ultrasonic obstacle avoidance so that the robot can avoid obstacles in the direction indicated by road signs, and an infrared ranging sensor is added so that the robot can also avoid potholes. This paper therefore proposes an intelligent obstacle avoidance design for an autonomous mobile robot based on multiple sensors in an environment with many obstructions. A CascadeClassifier is trained with positive and negative samples of road signs of similar color and shape. Multi-sensor information fusion is used for path planning, and the obstacle avoidance logic of the intelligent robot is designed to realize autonomous obstacle avoidance. The infrared sensors detect ground depressions along the wheel path, the ultrasonic sensors measure the distances to surrounding obstacles and road signs, and the road sign images captured by the camera are processed by the industrial personal computer and transmitted to the main controller. The environment information is processed by the microprocessor, which outputs control commands to the execution unit. The feasibility of the design is verified by analyzing the distances acquired by the ultrasonic and infrared ranging sensors and the model obtained by training the road sign samples, as well as by experiments in a manually constructed complex environment.

1. Introduction

With the development of artificial intelligence technology, mobile robots are widely used in intelligent factories, modern logistics, security, precision agriculture and other fields [1,2,3,4]. Wheeled mobile robots in particular have been widely used in storage and transportation. The focus of this research is to avoid obstacles in complex environments while keeping the path close to optimal. To realize autonomous motion control, a mobile robot must obtain information about its surrounding environment and transfer it to the main controller, which converts it into control commands; this ensures that the robot can safely and stably avoid all obstacles while moving to its destination, and it requires a strong perception system. Different types of sensors are required for different information, including proprioceptive sensors that measure quantities such as joint angle and wheel speed, and exteroceptive sensors that sense external data such as sound, light and distance [5,6,7,8,9,10]. The sensing technologies of mobile robots include passive sensing based on multiple cameras, stereo vision and infrared cameras, and active sensing using lidar and sonar sensors to detect dynamic or stationary obstacles in real time [11]. Laser ranging has been used to analyze the wheel skid of four-wheel skid-steering mobile robots, and other studies have proposed vision-based target tracking for wheeled mobile robots [12,13].
For an unknown environment, sensors are usually used for intelligent obstacle avoidance and path planning. An early method of obstacle avoidance and path planning was to detect stickers on the ground by infrared ray for navigation, which can only be used in a known environment [14]. Jiang et al. [15] utilized six ultrasonic sensors to capture information about the environment surrounding a wheeled robot and to identify a parking space for automatic parking. In 1995, Yuta and Ando [16] installed ultrasonic sensors on the front of a robot and at various locations on its left and right sides. In Refs. [17,18], multiple ultrasonic measurements were used to create a map of the surrounding environment or to establish the surface shape of obstacles.
At present, research on obstacle avoidance robots mostly concerns the motor driving principle, motor speed regulation scheme and ranging principle, and research on obstacle avoidance itself mostly addresses raised obstacles; few studies consider mobile robots that encounter pits during automatic travel. In this paper, information from ultrasonic sensors, infrared distance measuring sensors and a camera is fused. In addition to solving the above problems, a road sign recognition function is introduced, which allows the mobile robot to move accurately based on traffic sign information.

2. Establishing Kinematic Model

The object of this paper is a wheeled skid-steering mobile robot, which is driven independently by four symmetrically arranged wheels and exhibits both rolling and sliding during motion; its mechanical structure is simple and highly flexible. The body has no steering gear and relies on changing the left and right wheel speeds to make the wheels skid, achieving steering with different radii or even zero-radius steering. However, because of the nonlinear, time-varying, multi-variable and strongly coupled characteristics of the system, its motion is more uncertain than that of a vehicle with a steering device. It is therefore necessary to build a kinematic model that accounts for slip, rather than using a simple kinematic model, to represent its motion characteristics.
Figure 1 shows the kinematic model of the robot studied in this work. Without considering mass transfer, the following assumptions are made for the model:
  • The vehicle body is completely symmetrical and the center of mass coincides with the center of assembly;
  • The vehicle moves in a plane.
We established the fuselage coordinate system Ob and the ground coordinate system XOY, where the fuselage coordinate system moves with the vehicle.
In the fuselage coordinate system:
$$\begin{cases} v_{Gx} = v_G \cos\alpha \\ v_{Gy} = v_G \sin\alpha \end{cases} \quad (1)$$
In the process of moving, the relative sliding between the mobile robot and the ground is inevitable. We used the slip rate to describe the wheel slip and its calculation formula is:
$$s_i = \frac{w_i r - v_{ix}}{w_i r} \times 100\% \quad (2)$$
where $w_i$ is the rotational speed of wheel $i$ and $v_{ix}$ is the longitudinal (x-direction) velocity of the center of wheel $i$.
When $s_i > 0$, the wheel linear velocity is greater than the wheel-center velocity, the friction between the wheel and the ground acts as a driving force, and the wheel slips (spins).
When $s_i < 0$, the wheel linear velocity is less than the wheel-center velocity, the friction between the wheel and the ground acts as a braking force, and the wheel skids.
When $s_i = 0$, the wheel linear velocity equals the wheel-center velocity and the wheel is in a pure rolling state.
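To make the slip-rate definition above concrete, the short Python sketch below evaluates Equation (2) and classifies the wheel state; the function and variable names are ours and the numbers in the example are illustrative, not values measured on the robot.

```python
def slip_rate(w_i, r, v_ix):
    """Slip rate s_i = (w_i * r - v_ix) / (w_i * r), as in Equation (2).

    w_i  : wheel rotational speed [rad/s]
    r    : wheel radius [m]
    v_ix : longitudinal velocity of the wheel center [m/s]
    """
    linear = w_i * r
    return (linear - v_ix) / linear

def wheel_state(s_i, eps=1e-6):
    """Classify the wheel motion according to the sign of the slip rate."""
    if s_i > eps:
        return "slip (friction acts as driving force)"
    if s_i < -eps:
        return "skid (friction acts as braking force)"
    return "pure rolling"

# Example: wheel rim moving slightly faster than the wheel center.
s = slip_rate(w_i=10.0, r=0.1, v_ix=0.95)   # rim speed 1.0 m/s vs center 0.95 m/s
print(f"s_i = {s:.2%} -> {wheel_state(s)}")
```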
We define the side near the center of rotation as the inner side and the other side as the outer side. When the robot turns, the inner wheels skid and the outer wheels slip. The y coordinates of the instantaneous centers of the wheel–ground contact points on both sides are equal to the y coordinate of the center of rotation.
When the robot rotates, the longitudinal velocities at the centers of the wheels on the same side are equal:
$$w_1 r (1 - s_1) = w_2 r (1 - s_2), \qquad w_3 r (1 - s_3) = w_4 r (1 - s_4) \quad (3)$$
Because the wheels on the same side rotate at the same speed:
$$s_1 = s_2 = s_l, \qquad s_3 = s_4 = s_r$$
Formula (4) shows the relationship between linear velocity, attitude angular velocity and left and right wheel rotation speed:
$$\begin{cases} v = \dfrac{(w_l + w_r)\,r}{2} \\[4pt] \dot{\varphi} = \dfrac{(w_r - w_l)\,r}{b} \end{cases} \quad (4)$$
Since there is sliding, the linear velocity is represented by the longitudinal velocity of the wheel center, while the lateral velocity can be represented by the sideslip angle, so formula (5) can be obtained.
$$\begin{bmatrix} v_{Gx} \\ v_{Gy} \\ \dot{\varphi} \end{bmatrix} = \frac{r}{2} \begin{bmatrix} 1 - s_l & 1 - s_r \\ \tan\alpha\,(1 - s_l) & \tan\alpha\,(1 - s_r) \\ -\dfrac{2(1 - s_l)}{b} & \dfrac{2(1 - s_r)}{b} \end{bmatrix} \begin{bmatrix} w_l \\ w_r \end{bmatrix} \quad (5)$$
Through coordinate system transformation, the kinematics equation of the mobile robot in the ground coordinate system XOY can be expressed by formula (6).
$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_{Gx} \\ v_{Gy} \\ \dot{\varphi} \end{bmatrix} \quad (6)$$
The meanings of parameters in the formula are shown in Table 1.
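As a worked illustration of Equations (5) and (6), the following NumPy sketch maps wheel speeds to body-frame and then ground-frame velocities; the wheel radius, track width, slip rates and sideslip angle used in the example are placeholder values, not parameters of the actual robot.

```python
import numpy as np

def body_velocities(w_l, w_r, r, b, s_l, s_r, alpha):
    """Equation (5): body-frame velocities [v_Gx, v_Gy, phi_dot] from wheel speeds."""
    J = (r / 2.0) * np.array([
        [1 - s_l,                   1 - s_r],
        [np.tan(alpha) * (1 - s_l), np.tan(alpha) * (1 - s_r)],
        [-2 * (1 - s_l) / b,        2 * (1 - s_r) / b],
    ])
    return J @ np.array([w_l, w_r])

def world_velocities(body_vel, theta):
    """Equation (6): rotate body-frame velocities into the ground frame XOY."""
    R = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    return R @ body_vel

# Illustrative values only: r = 0.1 m, b = 0.5 m, small slip and sideslip.
v_body = body_velocities(w_l=8.0, w_r=10.0, r=0.1, b=0.5,
                         s_l=0.05, s_r=0.08, alpha=np.deg2rad(2.0))
print(world_velocities(v_body, theta=np.deg2rad(30.0)))  # [X_dot, Y_dot, theta_dot]
```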

3. System Structure of the Mobile Robot

The control system of the mobile robot is composed of the power module, the main controller, an industrial personal computer (IPC), the detection module and the drive module. The power module supplies energy to the whole system; since the modules operate at different voltages, voltage conversion and regulation are performed by step-up/step-down modules, while the ultrasonic sensors and infrared distance measuring sensors are powered directly by the controller. The detection module comprises the ultrasonic sensors, the infrared distance measuring sensors and a camera; it is mainly used to detect the surrounding environment and feeds the detected environment information back to the main controller. The camera data are first handed over to the IPC and then fed back to the main controller after processing. The main controller processes the environment information. The drive module consists of a motor driver and encoder motors: the motor driver controls the speed of each encoder motor, and the motor encoder measures the motor speed and feeds it back to the driver for closed-loop control. The main controller, an Arduino Mega 2560, is responsible for the information processing of the system. The detection system feeds environment information back to the main controller, the main controller processes the information, and the drive system is commanded according to this information to control the movement speed and attitude of the robot, so that obstacles within the range of activity are avoided. Figure 2 shows the system composition of the mobile robot.

4. Detection System

In research on obstacle avoidance for mobile robots, processing the information of the surrounding environment is especially important. In real life, the environment is dynamic and unknown, and in some environments there are signs that require the robot to move in a specified direction. It is important that the robot moves safely to its destination in such complex settings. By selecting appropriate sensors to collect and analyze environmental information, the robot can realize the above functions. In this design, the HC-SR04 ultrasonic sensor, the GP2YA02 infrared distance measuring sensor and a USB driver-free camera are selected as the components of the detection system.

4.1. Sensor Layout

In order to make the robot work normally in both static and dynamic environments, nine ultrasonic sensors, two infrared distance measuring sensors and a USB driver-free camera were installed on the robot body. The ultrasonic sensors detect raised obstacles in the surroundings; the infrared distance measuring sensors are mounted at the front of the chassis, between the front wheels, to detect ground pits; and the camera detects road sign information. The information detected by the sensors is transmitted to the main controller for processing, and commands are sent to the motor driver to make the robot perform the corresponding movement. The sensor layout of the mobile robot is shown in Figure 3.

4.2. Target Detection Based on Adaboost Algorithm

4.2.1. Sample Pretreatment

The training samples are divided into positive samples and negative samples: a positive sample is a picture of a road sign, and a negative sample is any other picture. In this paper, 1000 positive samples and 2000 negative samples were selected; the samples were converted to grayscale and normalized to 128 × 72 pixels to form the training sample set, so that different pictures do not yield different numbers of features. Figure 4 shows samples of the three road signs, each of which is trained separately; Figure 5 shows negative sample pictures.
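A minimal OpenCV sketch of the preprocessing step described above (grayscale conversion and normalization to 128 × 72) might look as follows; the directory names are hypothetical and stand in for however the samples are actually stored.

```python
import cv2
import glob
import os

SAMPLE_SIZE = (128, 72)  # (width, height) used for all training samples

def preprocess(src_dir, dst_dir):
    """Convert every image in src_dir to a 128x72 grayscale sample in dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in glob.glob(os.path.join(src_dir, "*.jpg")):
        img = cv2.imread(path)
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sample = cv2.resize(gray, SAMPLE_SIZE)
        cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), sample)

# Hypothetical directories for positive (road sign) and negative samples.
preprocess("raw/positive", "samples/positive")
preprocess("raw/negative", "samples/negative")
```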

4.2.2. CascadeClassifier Training Based on Adaboost

The Adaboost algorithm is an adaptive boosting algorithm. Its basic idea is to build a strong classifier from weak classifiers trained on sample spaces with different weight distributions [19,20,21,22]. Adaboost synthesizes a classifier with strong classification ability by superposing a large number of simple classifiers with only moderate classification ability; the strong classifier is formed by selecting the weak classifiers with the best discrimination performance and the smallest error. The principle is to carry out T iterations, selecting an optimal weak classifier each time and then updating the sample weights: the weights of correctly classified samples are reduced and the weights of incorrectly classified samples are increased. The specific algorithm is as follows [23,24,25]:
Step 1: Given a training sample set
$$\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\},$$
where $x_i$ is a training sample image and $y_i \in \{0, 1\}$ is its label: $y_i = 1$ denotes a positive sample and $y_i = 0$ a negative sample;
Step 2: Specify the number of loop iterations T;
Step 3: Initialize the sample weights:
$$w_1 = \{w_{1,1}, \ldots, w_{1,N}\}, \qquad w_{1,i} = d(i),$$
where $d(i)$ is the initial probability distribution over the samples (usually uniform);
Step 4: For $t = 1, 2, \ldots, T$ (T is the number of training rounds, which determines the number of weak classifiers in the final strong classifier):
(1) Normalize the sample weights:
$$p_t = \{p_{t,1}, \ldots, p_{t,N}\}, \qquad p_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{N} w_{t,j}};$$
(2) Train a weak classifier on the weighted samples with a learning algorithm:
$$h_t: X \to \{0, 1\};$$
(3) Compute the weighted error rate of each candidate weak classifier under the current weights:
$$\varepsilon_t = \sum_{i=1}^{N} p_{t,i}\,\bigl|h_t(x_i) - y_i\bigr|;$$
the weak classifier with the smallest error rate is selected and added to the strong classifier;
(4) Update the weights:
$$w_{t+1,i} = w_{t,i}\,\beta_t^{\,1 - |h_t(x_i) - y_i|},$$
where $|h_t(x_i) - y_i| = 0$ if sample $i$ is classified correctly and $|h_t(x_i) - y_i| = 1$ otherwise, and
$$\beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t};$$
(5) After T rounds, the strong classifier obtained is
$$H(x) = \begin{cases} 1, & \displaystyle\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0, & \text{otherwise,} \end{cases}$$
where $\alpha_t = \log\dfrac{1}{\beta_t}$.
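For illustration, the boosting loop of Steps 1–4 can be condensed into the following Python sketch using simple threshold (stump) weak classifiers on a synthetic data set; this is a simplified stand-in for the OpenCV cascade training actually used in this work, which operates on Haar features rather than raw feature thresholds.

```python
import numpy as np

def best_stump(X, y, p):
    """Pick the threshold classifier (feature, threshold, polarity) with the smallest weighted error."""
    n, d = X.shape
    best = (None, None, None, np.inf)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                h = (pol * X[:, j] >= pol * thr).astype(int)
                err = np.sum(p * np.abs(h - y))
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_train(X, y, T):
    """Discrete Adaboost with stump weak classifiers (labels y in {0, 1})."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # Step 3: initial weights d(i), uniform
    stumps, alphas = [], []
    for _ in range(T):                             # Step 4: T boosting rounds
        p = w / w.sum()                            # (1) normalize the weights
        j, thr, pol, eps = best_stump(X, y, p)     # (2)-(3) weak classifier and its error
        eps = min(max(eps, 1e-10), 1 - 1e-10)      # guard against degenerate error rates
        h = (pol * X[:, j] >= pol * thr).astype(int)
        beta = eps / (1.0 - eps)
        w = w * beta ** (1 - np.abs(h - y))        # (4) shrink weights of correct samples
        stumps.append((j, thr, pol))
        alphas.append(np.log(1.0 / beta))          # weak-classifier vote alpha_t
    return stumps, np.array(alphas)

def adaboost_predict(X, stumps, alphas):
    """Step (5): strong classifier as a weighted majority vote of the weak classifiers."""
    votes = np.array([(pol * X[:, j] >= pol * thr).astype(int)
                      for j, thr, pol in stumps])
    return (alphas @ votes >= 0.5 * alphas.sum()).astype(int)

# Tiny synthetic demo: one informative feature plus noise, one irrelevant feature.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = ((X[:, 0] + 0.1 * rng.normal(size=200)) > 0.5).astype(int)
stumps, alphas = adaboost_train(X, y, T=5)
print("training accuracy:", np.mean(adaboost_predict(X, stumps, alphas) == y))
```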

4.2.3. Road Sign Identification Process

Figure 6 shows the identification flow chart. The whole process can be divided into two steps: training and identification. In the training process, the Haar features are used to extract the features of a large number of road sign samples, and then the Adaboost algorithm is used to select the effective features to form the CascadeClassifier. In the recognition process, the key features of the samples to be identified are extracted first, and then the features and the trained CascadeClassifier are used for road sign recognition.
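On the recognition side, a trained cascade can be applied to camera frames with OpenCV's CascadeClassifier, as sketched below; the cascade file names and the camera index are placeholders, not artifacts published with this paper.

```python
import cv2

# Placeholder paths: one trained cascade per road-sign class.
cascades = {
    "left":  cv2.CascadeClassifier("left_sign_cascade.xml"),
    "right": cv2.CascadeClassifier("right_sign_cascade.xml"),
    "stop":  cv2.CascadeClassifier("stop_sign_cascade.xml"),
}

cap = cv2.VideoCapture(0)                 # USB driver-free camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for name, clf in cascades.items():
        # detectMultiScale slides the cascade over an image pyramid.
        boxes = clf.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in boxes:
            print(f"{name} sign at x={x}, y={y}, size={w}x{h}")
cap.release()
```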

5. Obstacle Avoidance Strategy for Mobile Robots

In this study, obstacle avoidance is realized mainly in two situations: avoidance of raised obstacles on the ground and avoidance of sunken obstacles (pits). Figure 7 shows the schematic diagram of the obstacles and obstacle avoidance routes considered in this study. We made the following assumptions about the obstacles:
  • Raised obstacles and ground pits only occur under the conditions shown in Figure 7;
  • Ground raised obstacles and a ground pit do not appear simultaneously;
  • The road sign is present on a raised obstacle;
  • The ground pit width is less than the wheel spacing of the robot;
  • A stop road sign is placed at the destination to prompt the robot to stop.
Figure 8 shows the block diagram of the motion program of the mobile robot. During operation, the data are first initialized and the vehicle is given an initial forward speed; the front ultrasonic sensor and the infrared distance measuring sensors are then started. If the ultrasonic sensor detects an obstacle, the robot stops and starts the camera to judge whether there is a road sign; if there is, it avoids the obstacle according to the direction indicated by the road sign; if there is not, it avoids the obstacle autonomously according to the built-in program. If the infrared distance measuring sensors detect a ground pit, the built-in program performs the corresponding movement to avoid the pit.
When the robot avoids obstacles, it needs to leave a certain amount of space so that it can turn safely. Since the length and width of the robot are both 70 cm, the distance between its center of rotation and its furthest point is about 50 cm; the safe distance is set to 60 cm because the robot deviates slightly when rotating. When the distance measured by the front sensor equals 60 cm, there is an obstacle ahead. At this point the camera is opened to detect whether there is a road sign: if a road sign is detected, the obstacle is avoided in the direction indicated by the type of road sign; if no road sign is detected, the camera is turned off and autonomous obstacle avoidance is performed. The camera is used only after an obstacle has been detected because the road signs are fixed to the surfaces of obstacles on the ground and the camera itself cannot measure distance; the only ranging means available is the ultrasonic sensor. Therefore, the ultrasonic sensor first detects the obstacle and determines its distance, and the camera is then turned on to determine whether there is a road sign on the obstacle, whose distance is then known.
The avoidance movement for raised obstacles on the ground is as follows: when the distance to an obstacle detected by the front ultrasonic sensor reaches 60 cm, the left and right ultrasonic sensors start to detect obstacles. If the distance measured by the left ultrasonic sensor is greater than that measured by the right one, the robot turns to the left; the left and right wheels then rotate in opposite directions, with the left wheels reversing. If the distance measured by the right ultrasonic sensor is greater than that measured by the left one, the robot turns to the right; the left and right wheels again rotate in opposite directions, with the left wheels turning forward. After turning successfully, the robot drives forward until it is out of the range of the obstacle, then turns in the direction opposite to the previous turn, continues driving and finally leaves the obstacle behind. The robot is considered to have left the obstacle range when the detection value of the side ultrasonic sensor exceeds 60 cm.
The avoidance movement for ground pits is as follows: the distance between the infrared sensor and the ground is 6 cm and the distance between the chassis and the ground is 3 cm, so the safe reading must be less than 9 cm and is set to 8 cm in this experiment; when the distance detected by an infrared ranging sensor reaches 8 cm or more, a pit is present and must be avoided. When the detection distance of the left infrared sensor is greater than or equal to 8 cm, the left wheels slow down and the right wheels accelerate, steering the robot to the left to avoid the pit. When the detection distance of the right infrared sensor is greater than or equal to 8 cm, the right wheels slow down and the left wheels accelerate, steering the robot to the right to avoid the pit. When pits are detected on both sides, the vehicle stops and waits to be moved manually.
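The avoidance logic described in this section (60 cm safe distance for raised obstacles, 8 cm infrared threshold for pits, and road-sign override) can be summarized as one decision step, sketched below in Python; all sensor-reading and motor-command functions are hypothetical placeholders for the robot's Arduino/IPC interfaces.

```python
SAFE_FRONT_CM = 60      # ultrasonic safe distance for raised obstacles
PIT_THRESHOLD_CM = 8    # infrared reading at or above this means a pit in the wheel path

def avoidance_step(read_front_us, read_left_us, read_right_us,
                   read_left_ir, read_right_ir, detect_sign,
                   turn_left, turn_right, stop, forward):
    """One pass of the obstacle-avoidance logic (all I/O functions are injected stubs)."""
    # Pit avoidance: steer as described in the text, stop if pits are seen on both sides.
    left_pit = read_left_ir() >= PIT_THRESHOLD_CM
    right_pit = read_right_ir() >= PIT_THRESHOLD_CM
    if left_pit and right_pit:
        stop()                              # wait for manual intervention
        return
    if left_pit:
        turn_left()
        return
    if right_pit:
        turn_right()
        return

    # Raised obstacle: check for a road sign, otherwise pick the freer side.
    if read_front_us() <= SAFE_FRONT_CM:
        sign = detect_sign()                # returns "left", "right", "stop" or None
        if sign == "stop":
            stop()
        elif sign == "left":
            turn_left()
        elif sign == "right":
            turn_right()
        elif read_left_us() > read_right_us():
            turn_left()
        else:
            turn_right()
        return

    forward()

# Minimal dry run with stubbed sensors: front obstacle at 55 cm, "left" sign visible.
avoidance_step(
    read_front_us=lambda: 55, read_left_us=lambda: 120, read_right_us=lambda: 80,
    read_left_ir=lambda: 6, read_right_ir=lambda: 6,
    detect_sign=lambda: "left",
    turn_left=lambda: print("turn left"), turn_right=lambda: print("turn right"),
    stop=lambda: print("stop"), forward=lambda: print("forward"),
)
```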

6. Experiment and Analysis

Figure 9 shows the physical prototype used in the experiment, and Table 2 explains the labels in the prototype diagram. The prototype consists of the vehicle body, the drive assembly, the detection assemblies and the control assembly. The drive assembly includes the wheels and the drive members that rotate the wheels relative to the vehicle body. Several detection assemblies are connected to the vehicle body: the ultrasonic sensors detect obstacles around the vehicle body, the infrared sensors detect obstacles (pits) below the vehicle body, and the camera detects road sign information. The control assembly is connected to the drive members and the detection assemblies; it receives the signals from the detection assemblies and, when an obstacle is detected, controls the drive members to turn the wheels so as to avoid the obstacle.

6.1. Sensor Ranging Experiment

Figure 10 shows the ultrasonic sensor. The HC-SR04 ultrasonic ranging module provides a non-contact ranging range of 2 cm–450 cm with an accuracy of up to 3 mm; the module includes an ultrasonic transmitter, a receiver and a control circuit.
The ultrasonic module has four pins: VCC, Trig (control end), Echo (receiving end) and GND. VCC and GND are connected to the 5 V supply and ground, Trig controls the transmission of the ultrasonic signal, and Echo receives the reflected ultrasonic signal.
Figure 11 shows the principle of ultrasonic ranging, which is based on the reflection characteristics of sound waves. The transmitter end of the ultrasonic sensor emits a beam of ultrasonic waves and starts timing at the same moment; the waves propagate through the medium and, because sound waves are reflective, bounce back when they encounter an obstacle. When the receiving end of the sensor receives the reflected wave, the timing stops. The propagation medium in this study is air, in which the speed of sound is 340 m/s. From the recorded time t, the distance S between the launching position and the obstacle is S = 340t/2.
Figure 12 shows the timing sequence of the ultrasonic sensor. As shown in Figure 12, a trigger pulse of more than 10 µs must be provided; the module then emits eight 40 kHz pulses and detects the echo. Once the echo is detected, an echo signal is output whose pulse width is proportional to the measured distance. The distance is obtained from the time interval between transmitting the signal and receiving the echo: distance = high-level time × speed of sound / 2. In order to avoid the influence of the transmitted signal on the echo signal, the measurement period should be at least 60 ms.
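A small helper illustrating the timing relation above (distance = echo high-level time × speed of sound / 2); the example pulse width is illustrative only.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0340   # 340 m/s expressed in cm per microsecond

def echo_to_distance_cm(high_level_us):
    """Convert the HC-SR04 echo pulse width (in µs) into a one-way distance in cm."""
    return high_level_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# An echo pulse of about 3530 µs corresponds to roughly 60 cm (the safe distance).
print(f"{echo_to_distance_cm(3530):.1f} cm")
```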
An ultrasonic sensor was used to measure the value of distance from the object. The test distance and actual distance of ultrasonic sensor obtained through multiple experiments are shown in Table 3.
In order to improve the accuracy of the distance values measured by the sensor, MATLAB was used to fit the values in the table with the least squares method. The fitted curve is $y = ax + b$, where $a = 1.0171$ and $b = 0.262$, as shown in Figure 13; in Figure 13, the vertical axis shows the measured values and the horizontal axis the actual values, both in cm. Table 4 shows the actual values and the values after fitting; it can be seen from the table that the error is very small.
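The least-squares correction can be reproduced, for example, with NumPy instead of MATLAB; the sketch below uses a subset of the (test, actual) pairs from Table 3 and should give coefficients close to the reported a = 1.0171 and b = 0.262.

```python
import numpy as np

# (test distance x, actual distance y) pairs taken from Table 3, in cm (subset).
x = np.array([9.93, 19.29, 29.12, 38.98, 48.93, 58.74, 68.69, 78.14, 88.22, 98.03])
y = np.array([10.0, 20.0,  30.0,  40.0,  50.0,  60.0,  70.0,  80.0,  90.0, 100.0])

a, b = np.polyfit(x, y, deg=1)          # least-squares fit y = a*x + b
print(f"a = {a:.4f}, b = {b:.4f}")      # close to the reported a = 1.0171, b = 0.262

corrected = a * 58.74 + b               # apply the correction to a raw reading
print(f"raw 58.74 cm -> corrected {corrected:.2f} cm")
```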

6.2. Road Sign Detection Experiment

Figure 14 shows the experimental results of detecting actual road signs using the camera and the trained model. Figure 15 shows the actual environment of the detection experiment; the background of the environment is not a pure color, which matches the conditions the robot will encounter. In Figure 14, from left to right and from top to bottom, the test distances are 10 cm, 20 cm, 30 cm, 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm and 100 cm. Experiments were conducted on road signs at these different distances. The results show that at detection distances of 20 cm and 25 cm the road sign cannot fit entirely in the field of view because of the camera parameters, so it is only partially visible. Table 5 and Table 6 give the experimental data for road sign recognition. From these two tables it can be concluded that the success rate of recognition is lower when the distance between the camera and the road sign is 25 cm or less, and higher when the distance is greater than or equal to 30 cm, reaching an average of 99.625%, which meets the requirements for accurate obstacle avoidance.

6.3. Physical Test

Figure 16 shows the intelligent PID motor driver module. It has a built-in controller capable of PID computation and trapezoidal control, and it drives the DC motors through a drive circuit on the board. An 8-byte command sent through the serial port controls the forward and reverse motion of the dual motors. Figure 17 shows the motor speed curve after PID adjustment. It can be observed that when the motor speed is stepped from 0 to the maximum speed, the overshoot is very small and the curve reaches dynamic equilibrium in a very short time.
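Since the driver's built-in control is a standard PID loop, the minimal discrete PID sketch below illustrates the idea; the gains, sampling period and first-order motor model are purely illustrative and are not the driver's actual (unpublished) parameters.

```python
class PID:
    """Minimal discrete PID controller for a motor speed loop."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains only: drive a toy first-order motor model toward 100 rpm.
pid, speed = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01), 0.0
for _ in range(200):
    u = pid.update(setpoint=100.0, measurement=speed)
    speed += (u - speed) * 0.05          # toy motor response, not the real plant
print(f"speed after 2 s: {speed:.1f} rpm")
```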
Figure 18 shows the artificially constructed experimental environment, including obstacles and road signs. The obstacle avoidance experiment with the physical prototype was conducted in this environment, and the experimental results are shown in Figure 19. The results show that the proposed method can successfully identify road signs and realize autonomous obstacle avoidance in complex environments. Figure 20 shows a real-time picture of road sign detection (not the experimental environment shown in Figure 19).

7. Conclusions

In the physical prototype experiment, the mobile robot passed through narrow gaps between obstacles stably and safely, ran correctly according to the directions indicated by the road signs and finally reached the given destination. The experimental results verify the feasibility of the design and the accuracy of road sign detection and obstacle avoidance. The multi-sensor information fusion method not only compensates for the errors of a single sensor, but also senses obstacles of multiple types in multiple directions around the robot and realizes the obstacle avoidance function. Therefore, it can be widely used in mobile robot systems.

Author Contributions

Conceptualization, J.Z., J.F. and S.W.; methodology, J.F. and S.W.; software, J.F.; validation, J.F., C.L. and K.W.; formal analysis, J.F.; investigation, K.W., S.W.; resources, T.H.; data curation, J.F. and C.L.; writing—original draft preparation, T.H. and S.W.; writing—review and editing, J.Z. and T.H.; visualization, J.F.; supervision, J.Z. and T.H.; project administration, J.Z., S.W. and K.W.; funding acquisition, K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Social Science Foundation of China under Grant BIA200191.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available upon request.

Acknowledgments

This study is supported by the National Social Science Foundation of China (Grant/Award Numbers: BIA200191). We also thank the editors and reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, C.; Tomizuka, M. Real time trajectory optimization for nonlinear robotic systems: Relaxation and convexification. Syst. Control. Lett. 2017, 108, 56–63. [Google Scholar] [CrossRef]
  2. Das, P.; Behera, H.; Panigrahi, B. A hybridization of an improved particle swarm optimization and gravitational search algorithm for multi-robot path planning. Swarm Evol. Comput. 2016, 28, 14–28. [Google Scholar] [CrossRef]
  3. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef] [Green Version]
  4. Milioto, A.; Lottes, P.; Stachniss, C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2229–2235. [Google Scholar]
  5. Goodin, C.; Carrillo, J.T.; Mcinnis, D.P.; Cummins, C.L.; Durst, P.J.; Gates, B.Q.; Newell, B.S. Unmanned ground vehicle simulation with the virtual autonomous navigation environment. In Proceedings of the 2017 International Conference on Military Technologies (ICMT), Brno, Czech Republic, 31 May–2 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 160–165. [Google Scholar]
  6. Peterson, J.; Li, W.; Cesar-Tondreau, B.; Bird, J.; Kochersberger, K.; Czaja, W.; McLean, M. Experiments in unmanned aerial vehicle/unmanned ground vehicle radiation search. J. Field Robot. 2019, 36, 818–845. [Google Scholar] [CrossRef]
  7. Rivera, Z.B.; de Simone, M.C.; Guida, D. Unmanned ground vehicle modelling in gazebo/ROS-Based environments. Machines 2019, 7, 42. [Google Scholar] [CrossRef] [Green Version]
  8. Galati, R.; Reina, G. Terrain Awareness Using a Tracked Skid-Steering Vehicle With Passive Independent Suspensions. Front. Robot. AI 2019, 6, 46. [Google Scholar] [CrossRef]
  9. Dogru, S.; Marques, L. Power Characterization of a Skid-Steered Mobile Field Robot with an Application to Headland Turn Optimization. J. Intell. Robot. Syst. 2018, 93, 601–615. [Google Scholar] [CrossRef]
  10. Figueras, A.; Esteva, S.; Cufí, X.; De La Rosa, J. Applying AI to the motion control in robots. A sliding situation. IFAC-PapersOnLine 2019, 52, 393–396. [Google Scholar] [CrossRef]
  11. Almeida, J.; Santos, V.M. Real time egomotion of a nonholonomic vehicle using LIDAR measurements. J. Field Robot. 2012, 30, 129–141. [Google Scholar] [CrossRef]
  12. Kim, C.; Ashfaq, A.M.; Kim, S.; Back, S.; Kim, Y.; Hwang, S.; Jang, J.; Han, C. Motion control of a 6WD/6WS wheeled platform with in-wheel motors to improve its maneuverability. Int. J. Control. Autom. Syst. 2015, 13, 434–442. [Google Scholar] [CrossRef]
  13. Reinstein, M.; Kubelka, V.; Zimmermann, K. Terrain adaptive odometry for mobile skid-steer robots. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 4706–4711. [Google Scholar]
  14. Petriu, E. Automated guided vehicle with absolute encoded guide-path. IEEE Trans. Robot. Autom. 1991, 7, 562–565. [Google Scholar] [CrossRef]
  15. Jiang, K.; Seneviratne, L.D. A sensor guided autonomous parking system for nonholonomic mobile robots. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), Detroit, MI, USA, 10–15 May 1999. [Google Scholar] [CrossRef]
  16. Ando, Y.; Yuta, S. Following a wall by an autonomous mobile robot with a sonar-ring. In Proceedings of the 1995 IEEE International Conference on Robotics and Automation, Nagoya, Japan, 21–27 May 1995. [Google Scholar] [CrossRef]
  17. Han, Y.; Hahn, H. Localization and classification of target surfaces using two pairs of ultrasonic sensors. Robot. Auton. Syst. 2003, 33, 31–41. [Google Scholar] [CrossRef]
  18. Silver, D.; Morales, D.; Rekleitis, I.; Lisien, B.; Choset, H. Arc carving: Obtaining accurate, low latency maps from ultrasonic range sensors. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 1554–1561. [Google Scholar] [CrossRef]
  19. Mu, Y.; Yan, S.; Liu, Y.; Huang, T.; Zhou, B. Discriminative local binary patterns for human detection in personal album. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
  20. Zhou, S.; Liu, Q.; Guo, J.; Jiang, Y. ROI-HOG and LBP Based Human Detection via Shape Part-Templates Matching Procs. Lect. Notes Comput. Sci. 2012, 7667, 109–115. [Google Scholar]
  21. Bar-Hillel, A.; Levi, D.; Krupka, E.; Goldberg, C. Part-Based Feature Synthesis for Human Detection. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6314, pp. 127–142. [Google Scholar] [CrossRef] [Green Version]
  22. Walk, S.; Schindler, K.; Schiele, B. Disparity Statistics for Pedestrian Detection: Combining Appearance, Motion and Stereo; Springer: Berlin/Heidelberg, Germany, 2010; pp. 182–195. [Google Scholar] [CrossRef]
  23. Liu, Y.; Shan, S.; Chen, X.; Heikkilä, J.; Gao, W.; Pietikäinen, M. Spatial-Temporal Granularity-Tunable Gradients Partition (STGGP) Descriptors for Human Detection; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6311, pp. 327–340. [Google Scholar] [CrossRef]
  24. Geronimo, D.; Sappa, A.D.; Ponsa, D. Computer vision and Image Understanding (Special Issue on Intelligent Vision Systems). Comput. Vis. Image Underst. 2010, 114, 583–595. [Google Scholar]
  25. Cho, H.; Rybski, P.E.; Bar-Hillel, A.; Zhang, W. Real-Time Pedestrian Detection with Deformable Part Models; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1035–1042. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Kinematic model.
Figure 2. Composition of mobile robot system.
Figure 3. Sensor Layout.
Figure 4. Positive sample picture.
Figure 5. Negative Sample Picture.
Figure 6. Identification flow chart.
Figure 7. Schematic diagram of obstacle, (a) is the raised obstacle on the ground; (b) is a sunken obstacle on the ground.
Figure 8. Block diagram of obstacle avoidance program.
Figure 9. Physical prototype.
Figure 10. Ultrasonic transducer.
Figure 11. Ranging principle.
Figure 12. Sequence diagram of ultrasonic sensor.
Figure 13. Curve after fitting.
Figure 14. Identification effect of road signs.
Figure 15. Road sign detection environment.
Figure 16. Motor driver.
Figure 17. Motor speed curve.
Figure 18. Experimental Environment.
Figure 19. Physical prototype experiment.
Figure 20. Real-time picture of road sign detection.
Table 1. Parameter list.

Parameter	Meaning
$r$	Wheel radius
$b$	Distance between the centers of the left and right wheels
$w_l$	Rotational speed of the left wheels
$w_r$	Rotational speed of the right wheels
$s_l$	Slip rate of the left wheels
$s_r$	Slip rate of the right wheels
$\alpha$	Sideslip angle
$\dot{\varphi}$	Yaw velocity about the z axis in the XOY plane
$v_{Gx}$	Longitudinal velocity of the center of mass
$v_{Gy}$	Lateral velocity of the center of mass
Table 2. Notes to physical prototype.

Label	Name
11	First mount
111	Mounting hole
12	2nd mounting block
2	Rear wheel
22	Front wheel
31	IR sensor
32	Ultrasonic sensor
33	Camera
Table 3. Actual distance and test distance.

Actual Distance (y)/cm	Test Distance (x)/cm
10	9.93
15	14.34
20	19.29
25	24.17
30	29.12
35	34.03
40	38.98
45	44.31
50	48.93
55	53.79
60	58.74
65	63.88
70	68.69
75	73.52
80	78.14
85	83.29
90	88.22
95	93.17
100	98.03
Table 4. Actual distance and fitted distance.

Actual Distance/cm	Fitting Distance/cm
10	10.3448
15	14.8298
20	19.8639
25	24.8269
30	29.8610
35	34.8545
40	39.887
45	45.3093
50	50.0078
55	54.9504
60	59.9846
65	65.2120
70	70.1037
75	75.0158
80	79.7144
85	84.9519
90	89.9657
95	94.9999
100	99.9425
Table 5. Experimental data of short-distance road sign recognition.

Detection Distance/cm	Number of Inspections	Number of Successful Tests	Number of Errors Detected	Detection Success Rate (Successes/Total Tests)
20	100	63	37	63%
25	100	80	20	80%
Table 6. Experimental data of long-distance road sign recognition.

Detection Distance/cm	Number of Inspections	Number of Successful Tests	Number of Errors Detected	Detection Success Rate (Successes/Total Tests)
30	100	100	0	100%
40	100	99	1	99%
50	100	100	0	100%
60	100	99	1	99%
70	100	100	0	100%
80	100	99	1	99%
90	100	100	0	100%
100	100	100	0	100%
Average success rate of detection: 99.625%

