Article

Obstacle Avoidance for Automated Guided Vehicles in Real-World Workshops Using the Grid Method and Deep Learning

1 Longyan Tobacco Industry Co., Ltd., Longyan 364021, China
2 Department of Electronics Engineering, Faculty of Electrical and Electronics Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
3 Department of Manufacturing Engineering and Automation Products, Opole University of Technology, 45-758 Opole, Poland
* Author to whom correspondence should be addressed.
Electronics 2023, 12(20), 4296; https://doi.org/10.3390/electronics12204296
Submission received: 7 September 2023 / Revised: 9 October 2023 / Accepted: 10 October 2023 / Published: 17 October 2023
(This article belongs to the Special Issue Autonomous Vehicles: Path Planning and Navigation)

Abstract

An automated guided vehicle (AGV) obstacle avoidance system based on the grid method and a deep learning algorithm is proposed, aimed at the complex and dynamic environment in the industrial workshop of a tobacco company. Deep learning object detection is used to detect obstacles in real time for the AGV, and feasible paths are generated by the grid method, which ultimately yields an AGV obstacle avoidance solution for complex dynamic environments. The experimental results showed that the proposed system can effectively identify and avoid obstacles in a simulated tobacco production workshop environment, achieving an average obstacle avoidance success rate of 98.67%. The transportation efficiency of cigarette factories is significantly improved with the proposed system, reducing the average execution time of handling tasks by 27.29%. This paper aims to provide a reliable and efficient solution for AGV obstacle avoidance in real-world industrial workshops.

1. Introduction

With the rapid development of modern manufacturing, automation technology is being used in more and more scenarios. Due to the complex and dynamic environmental conditions inside cigarette factories, mechanical transportation and manual operation are no longer sufficient to meet production needs. AGVs can be used in cigarette factories to improve efficiency and quality, as they run autonomously according to a program and transfer items back and forth during cigarette production. However, many obstacles often hinder the movement of AGVs within cigarette factories, such as pillars, packaging boxes, or other production equipment. These obstacles may block AGV operation; in severe cases, AGVs may collide with other equipment or employees, leading to serious accidents and even production delays. Therefore, it is necessary to study an efficient AGV obstacle avoidance system for the complex and dynamic conditions of the tobacco industry.
AGV obstacle avoidance technologies have been explored and developed in different ways. Gonzalez et al. [1] reviewed the obstacle avoidance techniques in the intelligent vehicle literature and discussed their contributions, providing comprehensive references for studies in relevant fields. At present, sensor-based obstacle avoidance is the most commonly used technique. Objects around the AGV are detected by sensors, which determine whether to avoid them. Information such as object position, shape, and distance is acquired by sensors such as laser rangefinders, ultrasonic sensors, infrared sensors, and visual sensors, which help AGVs avoid obstacles. Collision avoidance is considered a key issue in mobile robot navigation. Van Nguyen et al. [2] extracted long lines from 2D laser scans to detect static obstacles; furthermore, the Kalman filter and global nearest neighbor (GNN) method were combined to track the position and velocity of dynamic obstacles and obtain their spatial information. A speed estimation method based on polynomial fitting was used by Zheng et al. [3] to ensure the flight safety of unmanned aerial vehicles: the position of each point scanned by LiDAR was estimated, the distorted point clouds were corrected, and a clustering algorithm based on relative distance and density (CBRDD) was used to cluster the non-uniform-density point clouds to obtain the position information of obstacles. Zhou et al. [4] selected the STM32F207 as the main control chip, using peripheral devices such as grayscale sensors, ultrasonic sensors, infrared sensors, cameras, and wireless modules, in conjunction with the μC/OS-II real-time operating system, to complete a series of studies including AGV position data acquisition, tracking, trajectory tracking, voltage acquisition and conversion, RS232 serial communication, state data storage, I/O input and output control, as well as AGV modeling and trajectory tracking. A machine vision identification method for rice field stems was proposed by Liu et al. [5] to address the identification of turning positions in the visual navigation of rice planting machines. First, the distortion parameters were obtained through camera calibration and the original image was corrected; then the deviation of the average grayscale value of each image line was calculated in Python to determine whether obstacles appeared. Binary and morphological processing was performed on the obstacle image, and the image was scanned in the height direction to obtain feature points for fitting the obstacle boundary line. Finally, the least squares method was used to fit the feature points and obtain the obstacle boundary line. The effect of using a single sensor for measurement is not ideal; therefore, in practical applications, other sensors are often needed for compensation in order to achieve the best perception of the surrounding environment. A motion model for differential AGV navigation adopting an inertial guidance method based on multiple sensors was established by Wang et al. [6]: a Kalman filtering multi-sensor scheme was built by selecting encoders, gyroscopes, acceleration sensors, ultrasonic sensors, and infrared sensors, and a data fusion navigation and obstacle avoidance model and algorithm were proposed.
Path planning based on environmental simulation avoids obstacles by following preset paths. This method reduces the use of sensors and improves motion efficiency, but accurate environmental modeling and path planning are required; otherwise, AGVs may collide with obstacles. An improved artificial potential field method for optimizing the obstacle avoidance path planning of robotic arms in three-dimensional space was proposed by Xu et al. [7]. The target can leave the local minimum point where the algorithm becomes trapped, while avoiding obstacles and following a shorter feasible path along the repulsive equipotential surface for local optimization. A new method to plan the spraying path for multiple irregular and discontinuous working regions was proposed by Ma et al. [8]. An improved grid method is adopted to reduce the transition work path of irregular work areas on the basis of the traditional grid method, and the shortest total flight path between multiple discontinuous regions is designed using an improved ant colony algorithm. A path planning method based on an improved A* algorithm was proposed by Wang et al. [9] to account for the impact of scenic road conditions on route and road cost. The optimal scenic area route can be planned by comprehensively evaluating the weight of each extension node in the gridded scenic area. Firstly, the heuristic function of the A* algorithm is exponentially attenuated and weighted to improve the computational efficiency of the algorithm. Secondly, factors affecting road conditions are introduced into the evaluation function to increase the practicality of the A* algorithm. As a result, the improved A* algorithm can effectively reduce computational time and road costs. The AGV path planning problem of a single production line in the workshop was studied by Tao et al. [10].
A mathematical model with the shortest transportation time as the objective function is established, and an improved particle swarm optimization (IPSO) algorithm is proposed to obtain the optimal path. In order to solve path planning problems, the researchers proposed an encoding method based on this algorithm, designed a crossover operation to update particle positions, and adopted a mutation mechanism to avoid the algorithm falling into local optimal positions. Huang et al. [11] used a novel motion planning and tracking framework for automated vehicles, which relied on the artificial potential field elaborated resistance network approach. They provided several case studies to show the effectiveness of the proposed framework.
With the development of artificial intelligence technology, more researchers have applied it to the motion planning of automated vehicles [12,13]. For example, Akopov et al. [14,15] proposed a novel parallel real-coded genetic algorithm and combined it with fuzzy clustering to approximate the optimal parameters of a multiagent fuzzy transportation system, demonstrating that the proposed method effectively improves the maneuverability of automated vehicles. Deep learning uses its feature-learning ability and accelerated training techniques to recognize and analyze objects in the environment, which helps AGVs avoid obstacles [16]. Object detection algorithms such as YOLO, Faster R-CNN, and SSD are widely used in AGV obstacle avoidance systems. These algorithms can detect and recognize objects in real time, providing efficient and accurate obstacle avoidance solutions for AGVs. Sonar detection of image targets based on deep learning technology was studied by Yu [17], and corresponding algorithms were designed in response to the current shortcomings and needs in this field: first, various methods for acoustic image denoising based on multi-resolution tools were proposed; then, the natural image was divided into blocks at an appropriate rate according to changes in the sampling matrix; finally, underwater natural images were measured. Neural networks and convolutional neural network algorithms in deep learning were studied by Su et al. [18], and an object detection system based on these two algorithms was constructed to test treated sewage. On the basis of the YOLOv3 detection network, an improved network retaining the original basic features of the detection network was optimized and validated by Xiao et al. [19] in the context of intestinal endoscopy.
By comparing the hash values of the images, redundant images are filtered out, and the final concise detection results are presented to the doctors. A smart city object detection algorithm combining deep learning and feature extraction, targeting the phenomenon of occlusion in smart cities, was proposed by Wang et al. [20]. An adaptive strategy is proposed that optimizes the search window of the algorithm on the basis of the traditional SSD algorithm. A weighted correlation feature fusion method was combined with the algorithm according to changes in the target working conditions, which improves the accuracy of the objective function. A deep learning-based human track and field object detection and tracking method was proposed by Zhang et al. [21]. Background subtraction based on an adaptive mixed Gaussian background model was used to detect targets. The video image was denoised and smoothed, and holes in the foreground area were removed by morphological filtering. The connected areas of the binary image were analyzed to obtain their number and location. The area ratio and length-width ratio of the human body were used to classify and identify human bodies, so as to complete the detection of human track and field sport targets. Based on the structure of deep learning, the method combined the detection results of deep learning with LK tracking. PN learning was used to modify the parameters of the stacked autoencoder, which avoids detection errors in deep learning, and finally realized the track and field tracking of humans. Deep learning object detection algorithms are widely used in underwater image detection, medical image detection, smart city object detection, and other fields, and have significant advantages in universality and applicability.
In summary, to address the complex and dynamic environment inside the cigarette factory, an AGV obstacle avoidance system combining the grid method and a deep learning algorithm is proposed in this paper. Deep learning is used as the object detection algorithm to detect obstacles in real time for AGVs. The grid method is suitable for many irregular and discontinuous working areas, as it can properly handle scenes containing many obstacles and many restrictions.
The structure of this article is as follows: the background and research status are introduced in Section 1; the proposed AGV obstacle avoidance system is introduced in Section 2; the on-site test results are described in Section 3; and the conclusion is drawn in Section 4.

2. Materials and Methods

2.1. Working Principle of AGV Obstacle Avoidance System

The AGV obstacle avoidance system in this paper is mainly composed of a camera, a visual AI platform, a demonstration system, an AGV scheduling system, and an AGV car controller. A deep learning object detection algorithm is used by the system to detect obstacles. At the same time, a grid-based path planning method is used to generate a feasible path for the AGV, and the car's movement is controlled by the dispatching system to avoid obstacles. The AGV obstacle avoidance application architecture and data flow are shown in Figure 1. The detailed flow is as follows:
(1)
Video images of the AGV passage in the tobacco production workshop are obtained through cameras.
(2)
The object detection algorithm is used to analyze whether there are obstacles in the image after the image is obtained on the visual AI basic platform, and the obstacles are pushed to the AGV scheduling system.
(3)
The grid method is used to plan the route after obtaining obstacle information, and the optimal route is pushed to the AGV.
(4)
The real-time position information of the AGV is pushed to the control module on the AGV through the RS232 interface. The corresponding operation according to the control instructions is performed by the AGV; the information is sent to the basic platform through the 5G network for real-time position and video display of the AGV.
(a)
Camera module:
Image acquisition device (model: DS-2CD2646FWDA2-XZS, 2.7–12 mm): 2 MP 1/2.7″ CMOS low-light bullet network camera; minimum illumination: 0.0005 lux @ (F1.0, AGC ON), 0 lux with IR; wide dynamic range: 120 dB; maximum image size: 2560 × 1440; 1 built-in microphone; minimum illumination (color): 0.005 lx, (black and white): 0.0005 lx; maximum brightness discrimination (grayscale) no less than 11 levels; IP67 dustproof and waterproof.
(b)
Visual AI Basic Platform Module:
Computing power equipment (Huawei, Shenzhen, China, 2288X V5 12LFF): CPU: Xeon Silver 4208; graphics card: NVIDIA Tesla T4; memory: 128 GB; hard disk: 2 TB; network: dual-port 10 Gigabit Ethernet card.
(c)
AGV trolley module:
Brand: Rocla, Järvenpää, Finland; model: AWT16F S1400; guidance mode: laser; communication mode: 5G/Wi-Fi; transfer mode: fork type; walking speed: 0–2 m/s; minimum turning radius: 1800 mm; repetition accuracy: ±5 mm; safety protection: safe laser obstacle avoidance and safety contact.
The autonomous movement obstacle avoidance of the AGV can be realized through the cooperation of different systems, such as the camera’s acquisition of scene information, the visual AI platform algorithm’s processing and visual display of the demonstration system, as well as the coordination of the AGV’s own scheduling system and controller.

2.2. Obstacle Detection

The system uses deep learning object detection algorithms to detect obstacles in real time. Object detection algorithms such as YOLO, Faster R-CNN, and SSD can detect and recognize objects in real time with high accuracy. The system uses the YOLO algorithm to detect obstacles in the environment, as shown in Figure 2. The YOLO algorithm divides the image into multiple grid cells and detects objects in each cell; it can detect multiple objects in real time and provide accurate object information, such as position and size.
The YOLO (You Only Look Once) algorithm is an object detection algorithm based on convolutional neural networks. The YOLO algorithm has advantages such as fast speed, high accuracy, and end-to-end training, compared to traditional object detection algorithms such as R-CNN.
A single neural network is used by the YOLO algorithm to simultaneously predict the category and position information of multiple targets in the image. The prediction workflow is as follows:
Step 1. Feed the input image into the network for processing.
Step 2. Divide the image into 7 × 7 = 49 grid cells; each cell predicts two bounding boxes.
Step 3. Multiply the class probability predicted by each grid cell by the confidence predicted by each bounding box to obtain the class-specific confidence score PrIOU, i.e., the probability that a candidate box predicts a specific object weighted by the positional overlap. PrIOU is the value on the right-hand side of Equation (1).
Pr(Class_i | Object) × Pr(Object) × IOU_pred^truth = Pr(Class_i) × IOU_pred^truth    (1)
Step 4. Sort the PrIOU scores, remove those below the threshold, and then perform non-maximum suppression within each category.
The non-maximum suppression mentioned here refers to repeatedly selecting the candidate box with the highest confidence; any remaining candidate box is deleted if its overlap (IOU) with the current highest-scoring box exceeds a certain threshold.
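Step 4 can be sketched as a minimal pure-Python non-maximum suppression over axis-aligned boxes; the (x1, y1, x2, y2) box format and the 0.5 IOU threshold are illustrative choices, not values from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

In a full pipeline this would run once per category, as the text describes, after discarding boxes whose PrIOU score falls below the threshold.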

2.3. Path Planning

Feasible paths for AGVs are generated in the system by grid-based path planning. The layout of the cigarette factory is shown in Figure 3 (left): the yellow box represents the AGV path, the black box represents the packaging workshop, and the red box represents the AGV platform within a packaging unit. A packaging unit is shown in Figure 3 (right). If a packaging unit is the target site, the AGV follows the AGV path to that unit's platform to carry out the transportation task. For path planning, the entire workshop was divided into four areas with a total of 16 grids based on the factory environment. The obstacle avoidance area diagram is shown in Figure 4. At the same time, the A* algorithm was used to generate feasible paths for the AGV; it mainly generates the shortest path from the AGV's current position to the target position while also considering obstacles and constraints in the grid map, as shown in Figure 5.
With good performance and accuracy, the A* (A Star) algorithm is commonly used for path finding and graph traversal.
The priority of each node is calculated by the A* algorithm through the following Formula (2):
f(n) = g(n) + h(n)    (2)
where f(n) is the comprehensive priority of node n; the node with the highest comprehensive priority (minimum f value) is always chosen as the next node to traverse; g(n) is the cost of the distance between node n and the starting point; and h(n) is the estimated cost of the distance from node n to the endpoint, which is the heuristic function of the A* algorithm.
During operation, the A* algorithm selects the node with the lowest f(n) value (the highest priority) from the priority queue as the next node to be traversed. In addition, two sets are used by the A* algorithm to represent the nodes to be traversed and the nodes that have already been traversed, commonly referred to as open_set and close_set.
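The traversal described above can be sketched as follows, assuming a 4-connected occupancy grid and a Manhattan-distance heuristic (a common choice for grid maps; the paper does not specify which heuristic it uses):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle cell.

    f(n) = g(n) + h(n), with h(n) the Manhattan distance to the goal.
    open_set is a priority queue keyed on f(n); close_set holds visited cells.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]  # entries: (f, g, cell, path)
    close_set = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)  # lowest f(n) first
        if cell == goal:
            return path
        if cell in close_set:
            continue
        close_set.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in close_set:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no feasible path around the obstacles
```

On the workshop's 16-grid map, cells blocked by detected obstacles would be marked with 1 before each planning call.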

2.4. Obstacle Avoidance Control

The AGV scheduling system was used to adjust the speed and direction of the AGV. By communicating with the control module of the AGV, the scheduling system sends control instructions to the AGV, and controls parameters such as the movement and speed of the AGV. The AGV scheduling system can achieve the following aspects of control:
(1)
Motion control: the direction of AGV travel, including forward, backward, left turn, right turn, etc., can be controlled by the scheduling system in order to achieve path planning and task scheduling.
(2)
Speed control: the driving speed of AGVs can be controlled by the scheduling system to adjust according to task requirements, such as acceleration and deceleration.
(3)
Parking control: the parking and starting of AGVs can be controlled by the scheduling system to achieve tasks such as pausing and resuming.
(4)
Emergency stop control: emergency stop instructions are sent by the dispatch system to avoid accidents in a timely manner when an emergency situation occurs, such as obstacles, personnel, etc.
The remote and autonomous control of AGVs can be achieved by the scheduling system through the controls listed above in order to achieve logistics automation and intelligence, which improve production and logistics efficiency, and reduce labor and operational costs.
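The four kinds of control above can be sketched as scheduling-system instructions; the command names, message fields, and `build_instruction` helper are hypothetical illustrations, not the paper's actual protocol (only the 0–2 m/s speed range comes from the AGV specification above):

```python
from enum import Enum

class Command(Enum):
    """Hypothetical control instructions the scheduling system may send."""
    FORWARD = "forward"            # motion control
    BACKWARD = "backward"
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    SET_SPEED = "set_speed"        # speed control
    PARK = "park"                  # parking control (pause)
    RESUME = "resume"              # parking control (resume)
    EMERGENCY_STOP = "emergency_stop"  # emergency stop control

def build_instruction(agv_id, command, value=None):
    """Pack one control instruction as a dict, e.g. for RS232/5G transport."""
    msg = {"agv": agv_id, "cmd": command.value}
    if command is Command.SET_SPEED:
        # clamp to the AGV's rated walking speed of 0-2 m/s
        msg["speed_mps"] = max(0.0, min(2.0, value))
    return msg
```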

2.5. Rules of AGV Obstacle Avoidance System

2.5.1. Obstacle Discrimination Rules

In this design, the video obtained from the tobacco production workshop cameras is streamed to the visual AI basic platform for obstacle identification. The obstacle identification rules are as follows:
(1)
Screen the suspicious obstacles detected by deep learning and filter out the pedestrian category;
(2)
Confirm the type of obstacle and obtain the coordinates O_i = {o_i1, o_i2, …, o_im} of the corresponding obstacles, where i represents the large area monitored by the camera. The rules are as follows:
If the obstacle category is AGV, it is directly determined to be an obstacle.
If the obstacle category is other, calculate the obstacle's residence time; it is determined to be an obstacle if the residence time exceeds the threshold.
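The two discrimination rules can be sketched as a small predicate; the 30 s dwell threshold is an assumed value, since the paper does not state the threshold it uses:

```python
def is_obstacle(category, residence_time_s, dwell_threshold_s=30.0):
    """Obstacle discrimination following the rules above.

    category         -- label from the deep learning detector
    residence_time_s -- how long the object has stayed in place, in seconds
    """
    if category == "pedestrian":   # pedestrians are filtered out first
        return False
    if category == "AGV":          # another AGV is an obstacle immediately
        return True
    # any other category is an obstacle only after dwelling long enough
    return residence_time_s > dwell_threshold_s
```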

2.5.2. Obstacle Positioning Rules

When a suspicious object is identified as an obstacle, it needs to be located so that its location information can be transmitted to the scheduling system for route planning. The rules for obstacle positioning are as follows:
(1)
Each camera monitors a large area, and each large area is preset with several small cells. The coordinates of the preset small cells are A_i = {a_j1, a_j2, …, a_jn}, where i represents the large area monitored by the camera and j represents the preset number of small cells in each large area.
(2)
After confirming the category of an obstacle, convert the coordinates O_i = {o_i1, o_i2, …, o_im} of the corresponding obstacles to areas S_i = {s_i1, s_i2, …, s_im}.
(3)
Convert the coordinates A_i = {a_j1, a_j2, …, a_jn} of the several small cells to cell areas Q_i = {q_j1, q_j2, …, q_jn}.
(4)
Calculate the area of each obstacle and the areas of the several small cells to obtain the proportion of the obstacle falling within each small cell. The formula is as follows:
I_im = (s_im ∩ q_in) / s_im    (3)
(5)
Position the obstacle in the small cell with the maximum proportion, provided that this proportion is greater than the threshold α.
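Steps (2)–(5) can be sketched for axis-aligned bounding boxes; the (x1, y1, x2, y2) box format and the threshold α = 0.5 are illustrative assumptions:

```python
def overlap_ratio(obstacle, cell):
    """I = area(obstacle ∩ cell) / area(obstacle), boxes as (x1, y1, x2, y2)."""
    x1 = max(obstacle[0], cell[0]); y1 = max(obstacle[1], cell[1])
    x2 = min(obstacle[2], cell[2]); y2 = min(obstacle[3], cell[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = (obstacle[2] - obstacle[0]) * (obstacle[3] - obstacle[1])
    return inter / area

def locate_obstacle(obstacle, cells, alpha=0.5):
    """Return the index of the preset cell holding the largest share of the
    obstacle, or None if that share does not exceed the threshold alpha."""
    ratios = [overlap_ratio(obstacle, c) for c in cells]
    best = max(range(len(cells)), key=lambda j: ratios[j])
    return best if ratios[best] > alpha else None
```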

2.5.3. Regional Division and Obstacle Avoidance Routes

Using the grid method, the AGV obstacle avoidance area of the tobacco production workshop was divided into several large areas crossed by several lines. The left and right sides are the main roads, which are shared by both the up and down directions of the obstacle avoidance line in the middle. Each region is further subdivided into several small cells as obstacle avoidance areas. The obstacle avoidance route diagram is shown in Figure 6.

2.5.4. Obstacle Avoidance Rules

Corresponding obstacle avoidance rules were formulated for the AGV's conditional automatic navigation obstacle avoidance mechanism, covering eight situations in the actual production environment in which obstacles are encountered.
Obstacle avoidance rule 1: for the situation where there are obstacles on the main road and the target platform for AGV delivery is in the same horizontal area as the obstacles, the AGV continues to move towards the main road until the infrared module of the car triggers a stop and waits for the obstacles to be removed, as shown in Figure 7.
Obstacle avoidance rule 2: for situations where there are obstacles on the main road, but the target platform is in the area in front of the obstacle area, the car should follow the main road and deliver goods normally, as shown in Figure 8.
Obstacle avoidance rule 3: for the situation where there are obstacles on the main road and the target platform for AGV delivery is in the same area as the obstacles, the car follows the main route until the infrared module of the car triggers a stop and waits for the obstacles to be removed, as shown in Figure 9.
Obstacle avoidance rule 4: for the situation where there are obstacles on the main road and the target platform is behind the area where the obstacles are located, the car first turns into the obstacle avoidance route and then goes back to the main road, as shown in Figure 10.
Obstacle avoidance rule 5: for the situation that there are obstacles on both the main road and the obstacle avoidance path, and the obstacles are in the same horizontal area, the car continues on the main road until the infrared module of the car triggers a stop, and waits for the obstacles on the main road to be removed, as shown in Figure 11.
Obstacle avoidance rule 6: for the situation where there are obstacles on both the main road and the obstacle avoidance path, and the obstacles are not in the same horizontal area, the car continues to follow the obstacle avoidance route and then returns to the main route, as shown in Figure 12.
Obstacle avoidance rule 7: for situations where there are obstacles on both the main road and the obstacle avoidance path, and the obstacles are not in the same horizontal area, the car follows the main route, as shown in Figure 13.
Obstacle avoidance rule 8: for situations where there are obstacles on the main road and the platform is not currently in that area (obstacles are in A1 or A4, and the target platform is not in Area A), the car should follow the main route, as shown in Figure 14.
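A few of the rules above (rules 1/3, 2, and 4, for a single obstacle on the main road) can be sketched as a decision table; the action labels and the relative-position encoding are illustrative simplifications of the paper's rules:

```python
def avoidance_action(obstacle_on_main, target_vs_obstacle):
    """Simplified dispatch for one obstacle on the main road.

    target_vs_obstacle -- target platform's area relative to the obstacle
                          area: 'same', 'before', or 'behind'
    """
    if not obstacle_on_main:
        return "follow_main_road"
    return {
        # rules 1/3: stop when the infrared module triggers, wait for removal
        "same": "stop_and_wait",
        # rule 2: the target is reached before the obstacle area
        "before": "follow_main_road",
        # rule 4: detour via the obstacle avoidance route, then rejoin
        "behind": "detour_then_return",
    }[target_vs_obstacle]
```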

2.5.5. Rules for Avoidance of AGVs

Corresponding avoidance rules were established for the AGV's conditional automatic navigation obstacle avoidance mechanism for situations in the actual production environment in which other AGVs are encountered:
If AGVs traveling in both the up and down directions of the same area need to enter the obstacle avoidance line, the waiting AGV enters the obstacle avoidance line only after the AGV already on it has left, as shown in Figure 15.

3. Experiment Evaluation and Results

3.1. Introduction of Test Conditions

Experiments were conducted to evaluate the performance of the proposed AGV obstacle avoidance system. A tobacco production workshop environment was used to simulate the movement and obstacle avoidance of AGVs. Various obstacles were included in the environment, such as boxes, machines, and other AGVs.
As shown in Figure 16, in this study, the AGV obstacle avoidance area was divided into four major areas, A, B, C, and D, with a total of three lines. The left and right sides are the main roads, which are shared by both the up and down directions of the obstacle avoidance line in the middle. Each major region is further subdivided into six small cells, giving a total of 24 obstacle avoidance point areas. The video streams are transmitted to the AI vision platform by the camera in the corresponding region, and the obstacle information is transmitted to the AGV dispatching system through the obstacle discrimination and positioning rules. The route is determined by the obstacle avoidance rules and AGV avoidance rules, and the obstacle information is transmitted to the demonstration screen for display.

3.2. Comparison of Test Data

The comparison results before and after the transformation are shown in Table 1, Table 2 and Table 3.
The average total number of tasks was 385, and the average execution time of the AGV's handling tasks was 10.96 min before the renovation, as shown in Table 1. After the renovation, the average total number of tasks was 397, the average execution time was reduced to 8.61 min, and the efficiency was improved by 27.29%. The frequency of the AGV being blocked by obstacles was about 9.69% of the total number of tasks, and the average obstacle avoidance success rate of the AGV reached 98.67%, as shown in Table 2. The results indicate that the handling efficiency of the AGV has been significantly improved.
It should be noted that the average number of tasks after the renovation did not significantly increase, due to limitations of the workshop production process, which means that the total number of tasks is essentially fixed under the established production task conditions. As can be seen from the 10-day data tables before and after the transformation, there are significant differences between the maximum and minimum average numbers of tasks, but the difference before and after the transformation is small; this indicates that the variation was caused by the production process, rather than by the AGV obstacle avoidance system or its capabilities.
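As a quick arithmetic check, the 27.29% figure corresponds to measuring the gain relative to the post-renovation time of 8.61 min (i.e., as a throughput improvement); measuring relative to the original 10.96 min would give 21.44%:

```python
before, after = 10.96, 8.61  # average handling-task time in minutes (Table 1)

gain_vs_after = (before - after) / after * 100    # relative to the new time
gain_vs_before = (before - after) / before * 100  # relative to the old time

print(round(gain_vs_after, 2), round(gain_vs_before, 2))  # 27.29 21.44
```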

4. Conclusions

An AGV obstacle avoidance system based on the grid method and a deep learning algorithm was proposed for the complex and dynamic environment of tobacco production workshops. The system adopts grid-based path planning and deep learning object detection to provide real-time obstacle avoidance solutions for AGVs in complex and dynamic environments. The performance of the proposed system was tested in a simulated tobacco production workshop environment. The final experimental results are as follows:
(1)
The proposed AGV obstacle avoidance system can improve the handling efficiency of the AGV. The average execution time of the AGV’s handling tasks was 10.96 min before the renovation. The average execution time was reduced to 8.61 min after the renovation. By comparison, it can be found that the system improves handling efficiency by 27.29%.
(2)
The proposed AGV obstacle avoidance system can effectively identify and avoid obstacles. The frequency of the AGV being blocked by obstacles accounted for 9.69% of the total number of tasks, and the average obstacle avoidance success rate of the AGV could reach 98.67%.
In future work, the AGV obstacle avoidance system will be used to execute handling tasks in different tobacco production workshops and explore the sensitivity of performance metrics to values of obstacle parameters. Moreover, automated warehousing systems, automated manufacturing systems, and off-site transportation systems will be integrated to make connections tighter between each system, so as to further improve the work efficiency of cigarette factories.

Author Contributions

Conceptualization, T.G. and Z.L.; methodology, X.L.; software, W.R.; validation, X.L. and T.G.; formal analysis, X.L.; investigation, D.A.; resources, D.A.; data curation, J.G.; writing—original draft preparation, X.L., W.R. and Z.L.; writing—review and editing, D.A.; visualization, J.G.; supervision, W.R.; project administration, D.L.; funding acquisition, J.G., T.G. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the Norwegian Financial Mechanism 2014–2021 under Project Contract No 2020/37/K/ST8/02748.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Architecture of AGV obstacle avoidance system.
Figure 2. Detection results of YOLO algorithm.
Figure 3. AGV implementation diagram of the cigarette factory (left: the layout of the cigarette factory; right: the unit method diagram).
Figure 4. Diagram of the obstacle avoidance area.
Figure 5. Path planning of the A* algorithm.
Figure 6. The obstacle avoidance route diagram.
Figure 7. Obstacle avoidance rule 1.
Figure 8. Obstacle avoidance rule 2.
Figure 9. Obstacle avoidance rule 3.
Figure 10. Obstacle avoidance rule 4.
Figure 11. Obstacle avoidance rule 5.
Figure 12. Obstacle avoidance rule 6.
Figure 13. Obstacle avoidance rule 7.
Figure 14. Obstacle avoidance rule 8.
Figure 15. The obstacle avoidance rule.
Figure 16. The monitoring diagram of the AGV scheduling equipment in the packaging workshop.
Table 1. Selected 10-day data before renovation.

| Production Date | Total Number of Tasks (Items) | Average Execution Time of a Task (Minutes) |
| --- | --- | --- |
| 2 March 2022 | 344 | 11.32 |
| 4 March 2022 | 356 | 10.56 |
| 9 March 2022 | 356 | 11.14 |
| 17 March 2022 | 381 | 11.38 |
| 21 March 2022 | 395 | 11.21 |
| 6 April 2022 | 410 | 11.89 |
| 11 April 2022 | 397 | 10.84 |
| 12 April 2022 | 379 | 11.03 |
| 18 April 2022 | 410 | 10.21 |
| 26 April 2022 | 424 | 10.01 |
| Mean value | 385 | 10.96 |
Table 2. Selected 10-day data after renovation.

| Production Date | Total Number of Tasks (Items) | Average Execution Time of a Task (Minutes) | Blocked Times (Failure Times) | Proportion of Total Tasks | Automatic Successful Obstacle Avoidance Times | Obstacle Avoidance Success Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 11 October 2022 | 423 | 8.32 | 42 | 9.82% | 42 | 100.00% |
| 13 October 2022 | 405 | 8.44 | 41 | 10.2% | 41 | 100.00% |
| 17 October 2022 | 409 | 8.70 | 37 | 8.98% | 35 | 95.29% |
| 21 October 2022 | 388 | 9.26 | 36 | 9.26% | 36 | 100.20% |
| 24 October 2022 | 386 | 8.40 | 41 | 10.6% | 40 | 98.13% |
| 3 November 2022 | 433 | 8.32 | 42 | 9.67% | 41 | 97.62% |
| 8 November 2022 | 407 | 8.57 | 38 | 9.31% | 38 | 100.00% |
| 16 November 2022 | 409 | 8.76 | 40 | 9.69% | 40 | 100.97% |
| 21 November 2022 | 377 | 8.87 | 37 | 9.72% | 36 | 97.30% |
| 22 November 2022 | 330 | 8.44 | 32 | 9.65% | 31 | 96.88% |
| Mean | 397 | 8.61 | 38.42 | 9.69% | 37.91 | 98.67% |
Table 3. Comparison of sample average data before and after renovation.

| Name | Before Renovation | After Renovation |
| --- | --- | --- |
| Average number of tasks | 385 | 397 |
| Average execution time of a task (minutes) | 10.96 | 8.61 |