Search Results (9)

Search Parameters:
Keywords = AGV visual navigation

15 pages, 1292 KB  
Article
Lightweight Semantic Segmentation for AGV Navigation: An Enhanced ESPNet-C with Dual Attention Mechanisms
by Jianqi Shu, Xiang Yan, Wen Liu, Haifeng Gong, Jingtai Zhu and Mengdie Yang
Electronics 2025, 14(17), 3524; https://doi.org/10.3390/electronics14173524 - 3 Sep 2025
Viewed by 986
Abstract
Efficient navigation of Automated Guided Vehicles (AGVs) in dynamic warehouse environments requires real-time, accurate path segmentation. However, traditional semantic segmentation models suffer from excessive parameter counts and high computational costs, limiting their deployment on resource-constrained embedded platforms. A lightweight image segmentation algorithm is proposed, built on an improved ESPNet-C architecture that combines Spatial Group-wise Enhance (SGE) and Efficient Channel Attention (ECA) with a dual-branch upsampling decoder. On our custom warehouse dataset, the model attains 90.5% mIoU with 0.425 M parameters and runs at ~160 FPS, reducing the parameter count by a factor of 116–136 and computational cost by 70–92% compared with DeepLabV3+. The proposed model improves boundary coherence by 22% under uneven lighting and achieves 90.2% mIoU on the public BDD100K benchmark, demonstrating strong generalization beyond warehouse data. These results highlight its suitability as a real-time visual perception module for AGV navigation in resource-constrained environments and offer practical guidance for designing lightweight semantic segmentation models for embedded applications. Full article
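The Efficient Channel Attention (ECA) block mentioned above reduces to a small, concrete operation: pool each channel globally, run a cheap 1D convolution across channels, and gate the feature map with a sigmoid. A minimal numpy sketch, with the learned 1D convolution replaced by a fixed averaging kernel (an assumption for illustration; the ESPNet-C integration details are not reproduced here):

```python
import numpy as np

def eca(x, k=3):
    """Efficient Channel Attention sketch: channel weights from a 1D
    convolution over globally average-pooled features, then rescaling.
    x: feature map of shape (C, H, W)."""
    y = x.mean(axis=(1, 2))                    # global average pooling -> (C,)
    kernel = np.full(k, 1.0 / k)               # fixed averaging kernel (stand-in
                                               # for the learned 1D conv weights)
    y = np.convolve(y, kernel, mode="same")    # cross-channel interaction
    w = 1.0 / (1.0 + np.exp(-y))               # sigmoid gate, one weight per channel
    return x * w[:, None, None]                # reweight each channel
```

Because the gate acts per channel, the block adds only k weights in the real module, which is why ECA is popular in lightweight models.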

27 pages, 11817 KB  
Article
Navigation Map Construction Based on Semantic Segmentation and Multi-Submap Integration
by Gang Li, Chen Huang, Jian Yu and Hao Luo
Appl. Sci. 2025, 15(7), 3725; https://doi.org/10.3390/app15073725 - 28 Mar 2025
Cited by 1 | Viewed by 1751
Abstract
Traditional visual simultaneous localization and mapping (SLAM) systems typically generate sparse or semi-dense point cloud maps, which are insufficient for effective navigation and path planning. Constructing navigation maps through dense depth estimation generally entails high computational costs, and depth estimation is prone to errors in weakly textured regions such as road surfaces. Furthermore, traditional visual SLAM methods rely on local relative coordinate systems, making it extremely challenging to merge mapping results from different coordinate frames in navigation systems lacking global positioning constraints. To address these limitations, this paper presents a multi-submap fusion mapping method based on semantic ground fitting and incorporates global navigation satellite system (GNSS) to provide global positioning information via occupancy grid maps. The method emphasizes the integration of low-cost sensors into a unified system, aiming to create an accurate and real-time mapping solution that is cost-effective and highly applicable. Simultaneously, a multi-submap management mechanism is introduced to dynamically store and load maps, updating only the submaps surrounding the vehicle. This ensures real-time map updates while minimizing computational and storage resource consumption. Extensive testing of the proposed method in real-world scenarios, using a self-built experimental platform, demonstrates that the generated grid map meets the accuracy requirements for navigation tasks. Full article
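The multi-submap idea above — locally built grids anchored to global (GNSS-derived) coordinates and composited on demand — can be sketched as follows. The function and the max-composite rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def merge_submaps(submaps, global_shape):
    """Composite locally built occupancy submaps into one global grid.
    submaps: list of (grid, (row_offset, col_offset)), where the offset is
    the submap origin in global grid coordinates (e.g. derived from GNSS).
    Cell values are combined by taking the maximum (most pessimistic)."""
    global_grid = np.zeros(global_shape)
    for grid, (r0, c0) in submaps:
        h, w = grid.shape
        region = global_grid[r0:r0 + h, c0:c0 + w]
        np.maximum(region, grid, out=region)   # in-place max composite
    return global_grid
```

Keeping submaps separate and compositing only those near the vehicle is what allows the dynamic store/load behaviour the abstract describes.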

17 pages, 7440 KB  
Article
Research on Automatic Recharging Technology for Automated Guided Vehicles Based on Multi-Sensor Fusion
by Yuquan Xue, Liming Wang and Longmei Li
Appl. Sci. 2024, 14(19), 8606; https://doi.org/10.3390/app14198606 - 24 Sep 2024
Cited by 3 | Viewed by 2547
Abstract
Automated guided vehicles (AGVs) play a critical role in indoor environments, where battery endurance and reliable recharging are essential. This study proposes a multi-sensor fusion approach that integrates LiDAR, depth cameras, and infrared sensors to address challenges in autonomous navigation and automatic recharging. The proposed system overcomes the limitations of LiDAR’s blind spots in near-field detection and the restricted range of vision-based navigation. By combining LiDAR for precise long-distance measurements, depth cameras for enhanced close-range visual positioning, and infrared sensors for accurate docking, the AGV’s ability to locate and autonomously connect to charging stations is significantly improved. Experimental results show a 25-percentage-point increase in docking success rate (from 70% with LiDAR only to 95%) and a 70% decrease in docking error (from 10 cm to 3 cm). These improvements demonstrate the effectiveness of the proposed sensor fusion method, ensuring more reliable, efficient, and precise operation of AGVs in complex indoor environments. Full article
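One common way to combine range readings from heterogeneous sensors such as LiDAR, depth cameras, and infrared, as in the study above, is inverse-variance weighting. This is a generic sketch, not the paper's fusion method, and the sensor variances in the test are made up:

```python
def fuse_ranges(estimates):
    """Inverse-variance weighted fusion of independent range estimates.
    estimates: list of (range_m, variance) pairs, one per sensor.
    Returns the fused range and its (smaller) variance."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * r for (r, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)   # tighter than any single sensor
    return fused, fused_var
```

The fused variance is always below the best individual sensor's, which is the statistical rationale for fusing rather than switching between sensors.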
(This article belongs to the Collection Advances in Automation and Robotics)

21 pages, 7906 KB  
Article
Visual Servoing Architecture of Mobile Manipulators for Precise Industrial Operations on Moving Objects
by Javier González Huarte and Aitor Ibarguren
Robotics 2024, 13(5), 71; https://doi.org/10.3390/robotics13050071 - 2 May 2024
Cited by 7 | Viewed by 6539
Abstract
Although the use of articulated robots and AGVs is common in many industrial sectors such as automotive and aeronautics, the use of mobile manipulators is not yet widespread. Even where they are deployed, the majority of applications separate the navigation and manipulation tasks, avoiding simultaneous movement of the platform and the arm. The capability of mobile manipulators to perform operations on moving objects would open the door to new applications such as riveting or screwing parts transported by conveyor belts or AGVs. This paper presents a novel position-based visual servoing (PBVS) architecture for mobile manipulators for precise industrial operations on moving parts. The proposed architecture includes a state machine that guides the process through the different phases of the task to ensure its correct execution. The approach has been validated in an industrial environment for screw-fastening operations, obtaining promising results and metrics. Full article
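A PBVS architecture with a guiding state machine, as described above, can be reduced to a small sketch: a proportional velocity command driven by the vision-estimated pose error, with threshold-triggered phase transitions. The state names, thresholds, and gain below are hypothetical, not the paper's:

```python
def pbvs_step(state, error_norm, gain=0.5, coarse=0.05, fine=0.005):
    """One cycle of a position-based visual servoing state machine.
    error_norm: Cartesian distance (m) between tool and moving target,
    estimated from vision. Returns (next_state, commanded_speed)."""
    if state == "APPROACH" and error_norm < coarse:
        state = "TRACK"        # close enough: servo tightly on the target
    elif state == "TRACK" and error_norm < fine:
        state = "OPERATE"      # locked on: start the fastening operation
    speed = 0.0 if state == "OPERATE" else gain * error_norm
    return state, speed
```

As the error shrinks, the controller steps APPROACH → TRACK → OPERATE, which mirrors the phase-guarding role the abstract attributes to its state machine.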
(This article belongs to the Special Issue Integrating Robotics into High-Accuracy Industrial Operations)

22 pages, 5035 KB  
Article
Navigating Unstructured Space: Deep Action Learning-Based Obstacle Avoidance System for Indoor Automated Guided Vehicles
by Aryanti Aryanti, Ming-Shyan Wang and Muslikhin Muslikhin
Electronics 2024, 13(2), 420; https://doi.org/10.3390/electronics13020420 - 19 Jan 2024
Cited by 7 | Viewed by 3013
Abstract
Automated guided vehicles (AGVs) have become prevalent over the last decade. However, numerous challenges remain, including path planning, security, and the capacity to operate safely in unstructured environments. This study proposes an obstacle avoidance system that leverages deep action learning (DAL) to address these challenges and meet the requirements of Industry 4.0 for AGVs, such as speed, accuracy, and robustness. In the proposed approach, DAL is integrated into an AGV platform to enhance its visual navigation, object recognition, localization, and decision-making capabilities. DAL combines You Only Look Once (YOLOv4), speeded-up robust features (SURF), k-nearest neighbor (kNN), and AGV control for indoor visual navigation. The DAL system triggers SURF to differentiate two navigation images, and kNN verifies visual distance in real time to avoid obstacles on the floor while searching for the home position. Test results show that the proposed system is reliable and meets the needs of advanced AGV operations. Full article
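The kNN step described above — verifying visual distance to decide whether the path ahead is blocked — can be illustrated with a plain k-nearest-neighbor vote. The feature values and labels are invented for this sketch:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. train: list of (feature_vector, label); Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical training set: estimated visual distance (m) -> path status.
SAMPLES = [((0.3,), "blocked"), ((0.4,), "blocked"), ((0.5,), "blocked"),
           ((2.0,), "clear"), ((2.2,), "clear")]
```

In the paper's pipeline this verdict would gate the AGV's avoidance maneuver; here it is only a self-contained classifier.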
(This article belongs to the Section Artificial Intelligence)

17 pages, 3676 KB  
Article
Pre-Inpainting Convolutional Skip Triple Attention Segmentation Network for AGV Lane Detection in Overexposure Environment
by Zongxin Yang, Xu Yang, Long Wu, Jiemin Hu, Bo Zou, Yong Zhang and Jianlong Zhang
Appl. Sci. 2022, 12(20), 10675; https://doi.org/10.3390/app122010675 - 21 Oct 2022
Cited by 4 | Viewed by 2323
Abstract
Visual navigation is an important guidance method for industrial automated guided vehicles (AGVs). In practice, AGV lane images may be captured under overexposed conditions, which seriously reduces the accuracy of lane detection. Although deep-learning-based image segmentation is widely used in lane detection, it cannot by itself solve the problem of overexposed lane images. At the same time, existing segmentation networks cannot simultaneously meet the requirements of segmentation accuracy and inference speed. To address incomplete lane segmentation in overexposed environments, a lane detection method combining image inpainting and image segmentation is proposed. In this method, the overexposed lane image is first repaired and reconstructed by an MAE network, and the result is then fed into an image segmentation network for lane segmentation. In addition, a convolutional skip triple attention (CSTA) image segmentation network is proposed. CSTA improves the inference speed of the model while maintaining high segmentation accuracy. Finally, the lane segmentation performance of the proposed method is evaluated with three image segmentation metrics (IoU, F1-score, and PA) and inference time. Experimental results show that the proposed CSTA network achieves higher segmentation accuracy and faster inference speed. Full article
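The three evaluation metrics named above (IoU, F1-score, and pixel accuracy) have standard definitions for binary masks, sketched here:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """IoU, F1-score, and pixel accuracy (PA) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)            # lane pixels correctly predicted
    fp = np.sum(pred & ~gt)           # background predicted as lane
    fn = np.sum(~pred & gt)           # lane pixels missed
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)  # equals Dice for binary masks
    pa = np.mean(pred == gt)          # includes true negatives
    return iou, f1, pa
```

Note that PA rewards correct background pixels while IoU and F1 do not, which is why all three are usually reported together.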

13 pages, 1625 KB  
Article
Obstacle Detection for Autonomous Guided Vehicles through Point Cloud Clustering Using Depth Data
by Micael Pires, Pedro Couto, António Santos and Vítor Filipe
Machines 2022, 10(5), 332; https://doi.org/10.3390/machines10050332 - 2 May 2022
Cited by 10 | Viewed by 5285
Abstract
Autonomous driving is one of the fastest-developing fields of robotics. With the ever-growing interest in autonomous driving, the ability to provide robots with both efficient and safe navigation capabilities is of paramount significance. With the continuous development of automation technology, higher levels of autonomous driving can be achieved with vision-based methodologies. Moreover, materials handling in industrial assembly lines can be performed efficiently using automated guided vehicles (AGVs). However, the visual perception of industrial environments is complex due to the many obstacles along pre-defined routes. With the INDTECH 4.0 project, we aim to develop an autonomous navigation system that allows the AGV to detect and avoid obstacles based on the processing of depth data acquired with a frontal depth camera mounted on the AGV. Applying the RANSAC (random sample consensus) and Euclidean clustering algorithms to the 3D point clouds captured by the camera, we can isolate obstacles from the ground plane and separate them into clusters. The clusters give information about the location of obstacles with respect to the AGV position. In experiments conducted outdoors and indoors, the results revealed that the method is effective, achieving high detection rates in most tests. Full article
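The two algorithms named above have well-known minimal forms: RANSAC repeatedly fits a plane to three random points and keeps the model with the most inliers, and Euclidean clustering greedily grows groups by radius search. A self-contained numpy sketch (parameters such as the 0.05 m plane threshold are illustrative, not the paper's settings):

```python
import numpy as np

def ransac_ground(points, iters=200, thresh=0.05, seed=0):
    """Fit a ground plane to an Nx3 cloud with RANSAC; return inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)          # point-to-plane distance
        inliers = dist < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

def euclidean_clusters(points, radius=0.3):
    """Greedy Euclidean clustering: grow each cluster by radius search."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        queue = [unvisited.pop()]
        cluster = []
        while queue:
            i = queue.pop()
            cluster.append(i)
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                unvisited.remove(j)
            queue.extend(near)
        clusters.append(cluster)
    return clusters
```

Removing the ground inliers first and clustering only what remains is exactly the obstacle-isolation pipeline the abstract describes.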

23 pages, 5087 KB  
Article
Digital Twin for Automatic Transportation in Industry 4.0
by Alberto Martínez-Gutiérrez, Javier Díez-González, Rubén Ferrero-Guillén, Paula Verde, Rubén Álvarez and Hilde Perez
Sensors 2021, 21(10), 3344; https://doi.org/10.3390/s21103344 - 11 May 2021
Cited by 97 | Viewed by 10697
Abstract
Industry 4.0 is the fourth industrial revolution, consisting of the digitalization of processes to facilitate an incremental value chain. Smart Manufacturing (SM) is one of the branches of Industry 4.0, covering logistics, visual inspection of parts, optimal organization of processes, machine sensorization, real-time data acquisition and treatment, and virtualization of industrial activities. Among these techniques, the Digital Twin (DT) has attracted the research interest of the scientific community in recent years, owing to the cost reduction achieved by simulating the dynamic behaviour of the industrial plant and predicting potential problems in the SM paradigm. In this paper, we propose a new DT design concept based on external services for the transportation tasks of Automatic Guided Vehicles (AGVs), which have recently been introduced to satisfy Material Requirements Planning in collaborative industrial plants. We performed real experiments in two different scenarios, defining an Industrial Ethernet platform to validate the DT results obtained. Results show the correlation between the virtual and real experiments carried out in the two scenarios defined in this paper, with an accuracy of 97.95% and 98.82% in the total time of the missions analysed in the DT. These results validate the model created for AGV navigation, thus fulfilling the objectives of this paper. Full article
(This article belongs to the Special Issue Smart Manufacturing: Advances and Challenges)

25 pages, 5713 KB  
Article
MU R-CNN: A Two-Dimensional Code Instance Segmentation Network Based on Deep Learning
by Baoxi Yuan, Yang Li, Fan Jiang, Xiaojie Xu, Yingxia Guo, Jianhua Zhao, Deyue Zhang, Jianxin Guo and Xiaoli Shen
Future Internet 2019, 11(9), 197; https://doi.org/10.3390/fi11090197 - 13 Sep 2019
Cited by 17 | Viewed by 6222
Abstract
In the context of Industry 4.0, the most popular way to identify and track objects is to add tags, and currently most companies still use cheap quick response (QR) tags, which can be localized by computer vision (CV) technology. In CV, instance segmentation (IS) can detect the position of tags while also segmenting each instance. Currently, the mask region-based convolutional neural network (Mask R-CNN) method is used to realize IS, but the completeness of the instance mask cannot be guaranteed. Furthermore, due to the rich texture of QR tags, low-quality images can lower the intersection-over-union (IoU) significantly, preventing it from accurately measuring the completeness of the instance mask. To optimize the IoU of the instance mask, a QR tag IS method named the mask UNet region-based convolutional neural network (MU R-CNN) is proposed. We utilize the UNet branch to reduce the impact of low image quality on IoU through texture segmentation. The UNet branch does not depend on the features of the Mask R-CNN branch, so its training can be carried out independently. The pre-trained optimal UNet model ensures that the loss of MU R-CNN is accurate from the beginning of end-to-end training. Experimental results show that the proposed MU R-CNN is applicable to both high- and low-quality images, and is thus more suitable for Industry 4.0. Full article
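The core idea above — an independent texture-segmentation branch used to improve the completeness and IoU of a coarse instance mask — can be illustrated with a deliberately simple combination rule (intersection). This is an assumption for the sketch; the actual MU R-CNN fusion and training are not reproduced here:

```python
import numpy as np

def refine_mask(coarse, texture):
    """Keep only coarse-mask pixels confirmed by the texture branch."""
    return coarse.astype(bool) & texture.astype(bool)

def iou(a, b):
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.sum(a & b) / np.sum(a | b)
```

With an accurate texture mask, intersection removes spurious pixels from the coarse mask and raises its IoU against the ground truth.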
(This article belongs to the Special Issue Manufacturing Systems and Internet of Thing)
