Electronics | Editor’s Choice | Open Access Article | Published: 2 January 2025

Autonomous Forklifts: State of the Art—Exploring Perception, Scanning Technologies and Functional Systems—A Comprehensive Review

1 Department of Electronic & Computer Engineering, University of Limerick, V94 T9PX Limerick, Ireland
2 Autonomous Systems Group, Combilift, H18 VP65 Monaghan, Ireland
3 Lero, The Research Centre for Software, Tierney Building, University of Limerick, V94 NYD3 Limerick, Ireland
4 School of Engineering, University of Limerick, The Lonsdale Building, V94 T9PX Limerick, Ireland
This article belongs to the Special Issue Advancements in Connected and Autonomous Vehicles

Abstract

This paper presents a comprehensive overview of cutting-edge autonomous forklifts, with a strong emphasis on sensors, object detection and system functionality. It aims to explore how this technology is evolving and where it is likely headed in both the near and long term, while also highlighting the latest developments in both academic research and industrial applications. Given the critical importance of object detection and recognition in machine vision and autonomous vehicles, this area receives particular attention. The article provides an in-depth summary of both commercial and prototype forklifts, discussing key aspects such as design features, capabilities and benefits, and offers a detailed technical comparison; the comparative data presented pertain to commercially available forklifts. To better characterize the current state of the art and its limitations, the analysis also reviews commercially available autonomous forklifts. Finally, this paper includes a comprehensive bibliography of research findings in this field.

1. Introduction

The literature review conducted in this paper reports a comprehensive analysis of the latest advances in perception and scanning technology related to autonomous forklift systems. This paper offers a complete evaluation of the present state-of-the-art techniques and developments in machine vision for autonomous forklifts. It encompasses various aspects such as the identification of objects, localization, mapping and path planning, emphasizing the significant challenges and recent advancements in each area. Additionally, emerging technologies such as deep learning, sensor fusion and real-time perception are discussed from the perspective of their impact on boosting the capabilities of autonomous forklifts. Autonomous vehicles are currently a hot topic, as this era is witnessing a great deal of development in the field of self-driving that has the potential to revolutionize our lives in terms of safety, reliability and efficiency [1]. Transporting and delivering goods to storage places has been made easier by these promising vehicles, resulting in improved productivity.
Over the last two decades, there have been exceptional and unprecedented trends in the field of self-driving cars. The idea of self-driving cars first appeared in the 1920s, when the possibility of making an autonomous vehicle began to be seriously considered, as explained by Ondruš et al. [2], who also discussed the functioning of autonomous vehicles. The first promising endeavors towards autonomous vehicles, although of course not fully autonomous, were in the 1950s, with the initial autonomous vehicles emerging in the 1980s through the efforts of Carnegie Mellon University’s Navlab [3]. As the field progressed, multiple prominent companies and research centers became actively involved in its development. The automotive industry is presently advancing sensor-based solutions to enhance vehicle safety. These systems, known as Advanced Driver Assistance Systems (ADASs), utilize an array of sophisticated sensors, as confirmed in [4].
In this paper, a review of different types of sensors, such as Time of Flight (ToF) cameras and LiDAR, is presented to illustrate the principles of sensor use in autonomous vehicles and to equip the reader with the basic knowledge necessary to understand this technology. Additionally, this paper explains the fundamentals of modern advancements in the key features of self-driving technology. Self-driving forklift trucks are employed in warehouses to move goods between different locations. These autonomous vehicles have already had a considerable effect on logistics operations, with most current self-driving forklifts depending on advanced sensors, as well as vision and Geographic Information System (GIS)-based guidance systems. In addition, the scope of this paper also considers automated forklifts with multi-level load actuation in warehouse racking systems, which require vertical lifting capabilities to operate effectively. This excludes more basic floor-level automated guided vehicles (AGVs) that do not perform vertical lifting into racking, as they serve different operational purposes.
This paper is structured as follows: Section 1 introduces the focus and objectives of the research, providing an overview of the current advancements in autonomous forklifts. Section 2 reviews related work, highlighting key academic contributions and solutions for developing autonomous vehicle prototypes. Section 3 explains the architecture of autonomous vehicle components, detailing their perception, localization, planning and control systems. Section 4 focuses on machine vision and object detection techniques, emphasizing models used in autonomous forklifts and deep learning-based detection methods. Section 5 examines industry applications and major manufacturers of autonomous forklifts. Finally, the discussion and conclusions are presented in Sections 6 and 7, respectively.

Methodology and Criteria

To ensure a comprehensive and unbiased analysis, a structured approach was employed to select relevant research papers. The search was conducted using established academic indexing databases, including Scopus, IEEE Xplore, SpringerLink, ACM Digital Library and ScienceDirect, focusing on keywords such as “Autonomous Forklifts”, “Machine Vision”, “Perception Technology”, “Scanning Systems”, “Object Detection” and “Forklift Automation”. In addition, technical reports and white papers from official company websites indexed by these platforms were included to provide industry insights. Inclusion criteria prioritized peer-reviewed journal articles, conference papers and technical reports. Studies were selected based on their relevance to the main themes of the manuscript, with a particular emphasis on technological advancements, challenges and industry applications in the field of autonomous vehicle perception and scanning systems. Papers lacking sufficient technical detail or falling outside this scope were excluded. This methodology was designed to provide a focused yet comprehensive overview of the current state of the art in this domain.

3. Architecture of AV Components and AV Categorization

Figure 8 provides a summary of the key content in this section. Figure 9 illustrates the different levels of autonomous driving, specifying the degree of human intervention required at each level. Autonomous vehicles are categorized into six distinct levels, ranging from 0 to 5, each representing a different degree of self-driving capability, as illustrated in Figure 9.
Figure 8. Summary of Section 3’s content.
Figure 9. Levels of automation in AVs—SAE levels.
As presented in [1,71] by Kuutti et al., autonomous vehicles usually contain five functional components as shown in Figure 9.
  • Perception;
  • Localization;
  • Planning;
  • Control;
  • System management.
Perception serves to sense the vehicle’s surroundings, identifying significant objects such as obstacles. Meanwhile, localization creates a map of the surroundings and accurately determines the position of the vehicle. Additionally, planning utilizes the information from perception and localization to chart the vehicle’s overarching actions, encompassing route selection, lane changes and target speed settings. In addition, the control module supervises the execution of particular tasks, such as steering, acceleration and braking. Lastly, system management monitors the functioning of all components and offers a user interface for interaction between humans and machines.
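To make this division of responsibilities concrete, the following minimal Python sketch wires the five components into a simplified control loop. The class and method names are illustrative assumptions for this review, not an actual forklift software stack.

```python
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)  # detected objects (perception)
    pose: tuple = (0.0, 0.0, 0.0)                  # x, y, heading (localization)


class Perception:
    def sense(self, raw_sensor_data):
        # Detect significant objects (pallets, people, other vehicles).
        return list(raw_sensor_data.get("detections", []))


class Localization:
    def estimate_pose(self, raw_sensor_data):
        # Fuse odometry / scan data into a map-relative pose (placeholder).
        return raw_sensor_data.get("odometry", (0.0, 0.0, 0.0))


class Planner:
    def plan(self, world: WorldModel):
        # Choose a high-level action: route, lane change, target speed.
        return {"target_speed": 0.0 if world.obstacles else 1.5}


class Controller:
    def execute(self, command):
        # Translate the plan into steering / acceleration / braking set-points.
        print(f"applying command: {command}")


class SystemManager:
    def monitor(self, *components):
        # Supervise component health and expose a human-machine interface hook.
        return all(c is not None for c in components)


def control_loop(raw_sensor_data):
    perception, localization = Perception(), Localization()
    planner, controller, manager = Planner(), Controller(), SystemManager()
    world = WorldModel(
        obstacles=perception.sense(raw_sensor_data),
        pose=localization.estimate_pose(raw_sensor_data),
    )
    if manager.monitor(perception, localization, planner, controller):
        controller.execute(planner.plan(world))


control_loop({"detections": [], "odometry": (1.0, 2.0, 0.1)})
```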
The intricate operation of autonomous vehicles consists of interconnected components, as illustrated in Figure 10.
Figure 10. Typical autonomous vehicle concept.
The control system serves, in effect, as the vehicle’s brain. This central intelligence works in tandem with a network of sensors, including cameras, radar and LiDAR, which collectively gather real-time information about the vehicle’s surroundings. These sensory data are then processed by perception algorithms, enabling the vehicle to decipher and comprehend its environment. Following perception, the vehicle’s onboard artificial intelligence embarks on the critical task of planning and charting a course of action based on the assimilated data. This planning process involves decision-making algorithms that analyze factors such as traffic conditions, road obstacles and pedestrian movements. The ultimate goal is to generate a precise sequence of actions that will enable the vehicle to navigate its surroundings with safety and efficiency.
Operating in blended environments demands that autonomous vehicles flawlessly negotiate a diverse range of driving conditions, from bustling city streets to high-speed highways and unpredictable scenarios. To master this adaptability, a fusion of perception and planning is essential, equipping the vehicle to traverse varied landscapes with agility and accuracy.
Vehicle-to-vehicle (V2V) communication is an integral component of the autonomous vehicle revolution, enabling real-time information exchange among vehicles. This interconnectedness fosters enhanced road safety and efficiency. V2V communication facilitates collaborative decision-making, empowering vehicles to anticipate and respond to the actions of their counterparts on the road. For example, if a vehicle detects an obstacle or abrupt braking, it can instantly transmit this information to surrounding vehicles, allowing them to adapt their trajectories accordingly. The seamless interplay between control systems, sensors, perception algorithms, planning strategies, blended environment adaptability and V2V communication lies at the heart of successful autonomous vehicle implementation. This integration enables autonomous vehicles to adapt to dynamically changing surroundings and make informed real-time decisions.
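As a purely illustrative sketch of this kind of information exchange (not DSRC, C-V2X or any real V2V standard), the snippet below broadcasts a hazard message over a local UDP socket and shows how a receiving vehicle might react; the message fields and port number are assumptions for demonstration only.

```python
import json
import socket

V2V_PORT = 50005  # arbitrary port chosen for this example


def broadcast_hazard(position, hazard_type="sudden_braking"):
    """Send a hazard notification to nearby vehicles on the local network."""
    message = json.dumps({"type": hazard_type, "position": position}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("255.255.255.255", V2V_PORT))


def listen_for_hazards(timeout_s=1.0):
    """Receive one hazard message, if any, and adapt the local plan."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", V2V_PORT))
        sock.settimeout(timeout_s)
        try:
            data, sender = sock.recvfrom(1024)
            hazard = json.loads(data)
            print(f"hazard from {sender}: {hazard} -> reduce speed / replan")
        except socket.timeout:
            pass  # no messages received within the window


broadcast_hazard(position=(53.27, -8.06))
```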

3.1. Sensor Fusion and Perception

The integration of multiple sensors stands as a cornerstone in achieving robust perception systems across various technological domains. In fields like autonomous vehicles, robotics and industrial automation, the fusion of sensors such as LiDAR, radar and cameras is essential for understanding the environment.
By combining data from diverse sensors, these systems can compensate for individual sensor limitations, providing a more comprehensive and accurate understanding of the surroundings. This synergy not only improves the reliability of perception but also contributes to heightened safety and efficiency in complex operational scenarios. The seamless integration of multiple sensors represents a key advancement, ushering in a new era of intelligent and adaptive systems across various technological applications. In the article [72] by D. J. Yeong et al., the authors evaluate the functionalities and efficiency of sensors commonly utilized in autonomous vehicles and explore sensor fusion techniques.
The interaction between sensors and a controller in a machine vision system is shown in Figure 11.
Figure 11. The interaction between sensors and a controller in a machine vision system [73].
Sensors, such as cameras, LiDAR and other specialized devices, capture raw data from the surrounding environment. These data are then relayed to the controller, a central processing unit tasked with interpreting and deciphering the incoming signals. Employing sophisticated algorithms and programming, the controller extracts meaningful insights, facilitating real-time analysis and response. This dynamic exchange between sensors and the controller is crucial in applications like object recognition, tracking and automation.
The fusion of data from cameras, LiDAR and other sensors represents a cutting-edge approach to enhancing perception in various technological domains. By integrating information from diverse sources, fusion techniques create a more comprehensive and accurate understanding of the environment. In applications such as autonomous vehicles, robotics and surveillance systems, this multi-sensor fusion enables a more robust and adaptive perception system. The synergy of camera, LiDAR and other sensor data not only improves object recognition but also enhances overall system reliability, paving the way for more sophisticated and capable technologies.
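One common late-fusion pattern can be sketched as follows: LiDAR points are projected into the camera image with a pinhole model, and a range estimate is attached to each camera detection. The intrinsic matrix, point cloud and detection boxes below are made-up example values, not data from any particular forklift.

```python
import numpy as np

# Assumed pinhole camera intrinsics (fx, fy, cx, cy) for illustration only.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])


def project_points(points_cam):
    """Project 3D points (N x 3, camera frame, z forward) to pixel coordinates."""
    z = points_cam[:, 2]
    uv = (K @ points_cam.T).T
    return uv[:, :2] / z[:, None], z


def fuse(detections, lidar_points_cam):
    """Attach the median LiDAR range of points falling inside each 2D box."""
    pixels, depths = project_points(lidar_points_cam)
    fused = []
    for (x1, y1, x2, y2, label) in detections:
        inside = ((pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) &
                  (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2) & (depths > 0))
        rng = float(np.median(depths[inside])) if inside.any() else None
        fused.append({"label": label, "box": (x1, y1, x2, y2), "range_m": rng})
    return fused


# Toy data: one pallet detection and a few LiDAR returns in the camera frame.
detections = [(300, 200, 400, 300, "pallet")]
lidar_points = np.array([[0.1, 0.1, 3.0], [0.2, 0.15, 3.1], [2.0, 0.0, 10.0]])
print(fuse(detections, lidar_points))
```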

3.2. Real-Time Perception Algorithms and Architectures

As confirmed by [38], real-time perception algorithms and architectures are pivotal components in the realm of computer vision and artificial intelligence. These sophisticated systems enable machines to interpret and understand their surroundings instantaneously, providing a seamless interface between the digital and physical worlds. Whether applied in autonomous vehicles, surveillance systems or augmented reality, real-time perception algorithms harness advanced computational methods to rapidly process vast amounts of visual data. These algorithms, coupled with purpose-built architectures, empower machines to make split-second decisions, enhancing efficiency and responsiveness across diverse applications.

4. Deep Learning-Based Object Detection Techniques for Machine Vision

As widely recognized, machine vision plays a vital role in the overall development and deployment of autonomous vehicles, contributing to various aspects of perception, decision-making and control.
Figure 12 below highlights and presents the most significant content of this section. Recent important contributions such as [74,75] often involve advanced techniques such as computer vision, machine learning and sensor fusion. The authors of [76] presented a review of the perception systems and simulators for autonomous vehicles.
Figure 12. Section 4’s highlights: key points summarized.
There is limited research in the field of autonomous forklifts, as seen in studies such as [77] (2020), [18] (2009) and [20] (2010), which reflect the early stages of research on this technology. Additionally, [11] focuses on vehicles handling wooden pallets, regardless of the type of load, and [74] made a further contribution, describing a vehicle that can handle loads of the same size and shape. This underscores the significance of this field.
Figure 13 offers a comprehensive overview of the evolutionary landscape of deep learning methods for detection. A significant advancement in object detection came with the introduction of You Only Look Once (YOLO), a method that divides images into a grid and simultaneously predicts bounding boxes and class probabilities for each region. The Single-Shot Multibox Detector (SSD) further enhanced real-time object detection by more effectively handling objects of varying shapes and sizes. Figure 13 highlights these advancements, showcasing the remarkable contributions of YOLO, R-CNN and SSD in the continuous evolution of deep learning-based object detection techniques across a wide range of applications.
Figure 13. Deep learning object detection methods and algorithms.
Advanced learning methods have emerged as the cutting-edge approach for object detection, surpassing traditional computer vision techniques. These approaches leverage deep neural networks, specifically convolutional neural networks (CNNs), to autonomously capture and extract complex features from visual data. Object detection techniques based on deep learning not only attain impressive accuracy but also provide improved efficiency and speed. They have made substantial advancements in the realm of computer vision, facilitating diverse applications such as self-driving cars, surveillance systems and image identification.
In the following subsections, we explore some of the key advanced learning methods employed in object detection and their contributions to the field. Object detection algorithms can generally be categorized into two-stage and one-stage approaches, as detailed below.
Unfortunately, specifications of the precise machine vision model used by each company are unattainable. This is due to several factors:
  • Companies often employ a hybrid approach, utilizing various models tailored to the specific tasks and demands of their autonomous forklifts.
  • Given the continuous evolution of models, companies may periodically update their systems with newer, enhanced iterations.
  • Detailed specifications concerning the models may be deemed proprietary information, thus not publicly disclosed.
Nevertheless, overarching insights into the typical model categories utilized in autonomous forklifts can be provided. Object detection is a core function of autonomous forklifts, enabling the recognition and localization of items in their surroundings, such as pallets, obstacles and individuals.
Prominent models for object detection that can be used for autonomous forklifts include YOLO, Faster R-CNN, Mask R-CNN, EfficientDet, Detection Transformer (DETR) and SSD. There are several popular algorithms and techniques used for object detection, such as convolutional neural networks (CNNs), and region-based convolutional neural networks (R-CNNs). These techniques, along with other variations and advancements, are widely used in the development of computer vision systems for autonomous forklifts.
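For illustration, the following minimal sketch shows how one of these detector families can be invoked through the Ultralytics Python package. The checkpoint is a generic COCO-pretrained model and the image filename is a placeholder, so this does not represent any manufacturer’s production system; a deployed forklift detector would be fine-tuned on pallet, rack and person imagery.

```python
from ultralytics import YOLO

# Small COCO-pretrained checkpoint used purely for demonstration.
model = YOLO("yolov8n.pt")

# Hypothetical image of a warehouse aisle.
results = model("warehouse_aisle.jpg", conf=0.5)

for result in results:
    for box in result.boxes:
        cls_id = int(box.cls[0])               # predicted class index
        label = result.names[cls_id]           # e.g. "person"
        score = float(box.conf[0])             # detection confidence
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # pixel bounding box
        print(f"{label}: {score:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```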

4.1. Object Detection and Recognition Algorithms

4.1.1. Single-Stage Object Detectors

Deep learning-based methods have revolutionized computer vision tasks like object detection. Among these, regression/classification-based techniques such as YOLO, SSD, DETR (Detection Transformer) and CornerNet-2019 have emerged as prominent solutions. CornerNet-2019 is an object detection model, as shown in Figure 14, that uses a single pass through a convolutional neural network to detect objects as pairs of keypoints, the top-left and bottom-right corners of each bounding box. This approach eliminates the need for multiple stages, making CornerNet a one-stage object detection method.
Figure 14. The architecture structure of CornerNet 2019 inspired by [78].
The YOLO model is one of the most important algorithms in this field. YOLO, which stands for “You Only Look Once”, is an object detection algorithm commonly used in computer vision and image processing tasks. It was launched in 2016 by Joseph Redmon et al. [79].
As per [80], the mechanism of YOLO is clarified below. Each bounding box is predicted from the features of the entire image. The complete image is divided into a grid of size S×S, and each grid cell predicts B bounding boxes, each with an associated confidence value. Object presence is determined by this confidence value: a confidence of zero indicates that no object is present in the cell. For an object to be detected accurately, the confidence score should closely reflect the overlap between the predicted bounding box and the real object. Each grid cell also predicts class probabilities. To identify objects within an image, bounding boxes whose class probabilities exceed a specific confidence threshold are retained. Ultralytics presents YOLO-v5 and YOLO-v8, with the latter demonstrating impressive real-time performance.
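To make the grid-based prediction scheme concrete, the following simplified sketch decodes a toy YOLO-style output tensor: each cell’s final score is its objectness confidence multiplied by its class probability, low-scoring cells are discarded and non-maximum suppression removes overlapping boxes. The tensor layout, grid size and thresholds are illustrative assumptions, not the exact format of any specific YOLO version.

```python
import torch
from torchvision.ops import nms

S, B, C = 7, 1, 3                    # grid size, boxes per cell, classes (toy values)
pred = torch.rand(S, S, B * 5 + C)   # per cell: [x, y, w, h, objectness] + class probs

boxes, scores, labels = [], [], []
for i in range(S):
    for j in range(S):
        cell = pred[i, j]
        x, y, w, h, obj = cell[:5]
        class_probs = cell[5:]
        cls = int(torch.argmax(class_probs))
        score = float(obj * class_probs[cls])   # confidence times class probability
        if score < 0.25:                        # discard low-confidence cells
            continue
        # Convert cell-relative centre to image-relative corner coordinates (0..1).
        cx, cy = (j + float(x)) / S, (i + float(y)) / S
        w, h = float(w), float(h)
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
        scores.append(score)
        labels.append(cls)

if boxes:
    keep = nms(torch.tensor(boxes), torch.tensor(scores), iou_threshold=0.5)
    for k in keep:                               # indices of boxes to retain
        print(f"class {labels[k]}: score={scores[k]:.2f}, box={boxes[k]}")
```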
Figure 15 illustrates the YOLO architecture. The latest addition to the YOLO family at the time of writing, YOLOv9, was released in February 2024 by a different group of developers, including Chien-Yao Wang, I-Hau Yeh and Hong-Yuan Mark Liao, building upon the advances of its predecessors to deliver improved performance in object detection tasks [81]. While the formal YOLO-v8 paper is still pending and additional features are yet to be incorporated into its repository, initial comparisons with its predecessors reveal its superiority, positioning it as the new state of the art in the YOLO series [82].
Figure 15. YOLO architecture pipeline structure.
The specific choice of technique depends on factors such as the requirements of the application, the available computational resources and the desired trade-off between accuracy and real-time performance. Moreover, RetinaNet, a one-stage object detection model, was introduced in the paper “Focal Loss for Dense Object Detection” by Tsung-Yi Lin, Priya Goyal, Ross Girshick and Kaiming He, presented at ICCV in 2017. Figure 16 illustrates the network architecture of RetinaNet, while the SSD one-stage detector is shown in Figure 17.
Figure 16. The structure of the RetinaNet network architecture.
Figure 17. The structure of the SSD architecture.
The structure of the SSD model shown in Figure 17 is inspired by [83].

4.1.2. Two-Stage Object Detectors

Two-stage object detectors are a class of deep learning models used in computer vision to identify and locate objects within an image. The process is divided into two stages. The first stage, known as the region proposal network (RPN), analyzes the image to generate a set of candidate regions or proposals where objects are likely to be found. The second stage takes these proposals and refines them by classifying the objects and precisely regressing their bounding boxes. Two-stage object detectors, such as Mask R-CNN, are explained in detail below.
The region-based convolutional neural network (R-CNN) is an early two-stage, region proposal-based object detector in the same family as Mask R-CNN. Selective search was used to generate 2000 region proposals from test images, with only a restricted number of these proposals being retained. The next step involved applying a CNN to generate a fixed-length feature vector from each of the chosen region proposals. A linear SVM was employed to score the extracted feature vectors for object classification within each region proposal. To mitigate localization errors, a linear regression model was subsequently used to refine the boundaries of the bounding box.
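As a minimal sketch of how a two-stage detector of this family can be applied off the shelf, the snippet below runs a COCO-pretrained Mask R-CNN from torchvision and reports boxes, labels and instance-mask areas. The image path is a placeholder, and a real forklift deployment would fine-tune such a model on pallet and racking imagery rather than use the generic checkpoint shown here.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Mask R-CNN, used purely for illustration.
weights = torchvision.models.detection.MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=weights)
model.eval()

# Hypothetical camera frame from a warehouse.
image = to_tensor(Image.open("pallet_scene.jpg").convert("RGB"))

with torch.no_grad():
    out = model([image])[0]

for box, label, score, mask in zip(out["boxes"], out["labels"],
                                   out["scores"], out["masks"]):
    if score > 0.7:
        area_px = int((mask[0] > 0.5).sum())  # pixels covered by the instance mask
        print(f"{weights.meta['categories'][label]}: score={score:.2f}, "
              f"box={box.tolist()}, mask_area={area_px}px")
```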
The structure of the Mask R-CNN model is shown in Figure 18.
Figure 18. The structure of the Mask R-CNN architecture.
The architecture shown in Figure 18 is inspired by [84]. Mask R-CNN has become a prevalent tool in computer vision applications that demand intricate instance segmentation, including medical image analysis, autonomous vehicle operation and various other scenarios where precise object outlining is essential. Key milestones in Mask R-CNN development are shown in Table 6.
Table 6. Key milestones in Mask R-CNN development.
Table 7 shows a selection of the most cited and important articles in academia, as per the authors’ research.
Table 7. Selected articles in academia covering various deep learning algorithms.

4.2. Autonomous Vehicle Technology: Advantages for Manufacturers

Autonomous forklifts are equipped with technology that enables them to operate independently of human control, and machine vision algorithms play a critical role in allowing these forklifts to navigate, perceive their surroundings and make decisions based on what they see. In this section, we present a deep dive into the functionality and intricacies of autonomous vehicle technology. At the heart of this technology lies a fusion of sensors, cameras, radar and LiDAR, working in unison to perceive the surrounding environment. Advanced algorithms process these data in real-time, making critical decisions regarding steering, acceleration and braking [104].
As outlined in Section 3, partially autonomous vehicles with varying levels of automation are currently accessible worldwide, providing a more confident and safer way of delivering these services.

4.3. Machine-Readable Codes for Autonomous Vehicles

QR codes and ArUco markers are visual markers used in computer vision applications, and they can also play a role in enhancing the capabilities of autonomous vehicles.
In the domain of machine vision and autonomous vehicles, both QR codes and ArUco markers serve as visual markers or codes that can be recognized by cameras and sensors to aid in localization, navigation and object recognition.
Below is a brief comparison of the two types.

4.3.1. QR Codes

QR codes are commonly used for various purposes in the context of autonomous forklifts. Some examples are as follows:
Navigation: QR codes can serve as navigation markers or waypoints for autonomous forklifts. By placing QR codes strategically in the environment, the forklift’s sensors can detect and interpret them to determine its location and orientation. This helps the forklift to navigate accurately and perform tasks efficiently, such as finding the right storage location or following a predefined path.
Object Recognition: QR codes can be used to identify objects or pallets in a warehouse or manufacturing facility. Each item can be labeled with a unique QR code, which can store information about the item, such as its type, weight, destination or handling instructions. When the forklift encounters a QR code, it can scan and interpret it to gather relevant information, allowing it to handle the object appropriately. QR codes are a versatile tool that can enhance the capabilities of autonomous forklifts by providing critical information, aiding navigation and ensuring safe and efficient operations in various industrial settings.
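A minimal sketch of how a forklift’s camera feed could be used to read such a code with OpenCV’s built-in detector is shown below; the JSON payload format is an assumption for illustration, not an industry standard, and the file name is a placeholder.

```python
import json
import cv2

detector = cv2.QRCodeDetector()
frame = cv2.imread("rack_label.png")  # hypothetical camera frame or label image

if frame is not None:
    payload, corners, _ = detector.detectAndDecode(frame)
    if payload:
        # Assumed payload for this sketch: {"slot": "A-03-2", "max_weight_kg": 800}
        info = json.loads(payload)
        print("decoded QR:", info)
        print("marker corners in image:", corners.reshape(-1, 2))
    else:
        print("no QR code found in frame")
```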
Table 8 provides multiple short abstracts of selected references using QR codes in the field of object detection.
Table 8. Selected references using QR codes.

4.3.2. ArUco Markers

ArUco markers are a type of augmented reality marker commonly used in computer vision applications. These markers are essentially patterns that are easily recognizable by computer vision systems, making them useful in various tracking and localization tasks. ArUco markers are often used in the field of robotics and autonomous systems, as in [111]. Table 9 provides multiple short abstracts of selected references using ArUco marker codes in the field of object detection.
Table 9. Summary of articles discussing ArUco marker applications.
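As an illustrative sketch of how such markers can be detected and turned into a localization cue, the snippet below uses OpenCV’s ArUco module (the 4.7+ interface) and solvePnP to recover the marker’s position in the camera frame. The camera intrinsics, marker size and file name are assumed values for demonstration.

```python
import cv2
import numpy as np

# Assumed calibration values and marker size; a real system uses calibrated intrinsics.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
s = 0.15  # printed marker side length in metres

# Detection via the OpenCV >= 4.7 interface (older versions expose
# cv2.aruco.detectMarkers as a free function instead).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("aisle_marker.png")  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = detector.detectMarkers(gray)

# Marker corner layout (top-left, top-right, bottom-right, bottom-left) in metres.
obj_points = np.array([[-s / 2,  s / 2, 0], [s / 2,  s / 2, 0],
                       [s / 2, -s / 2, 0], [-s / 2, -s / 2, 0]], dtype=np.float32)

if ids is not None:
    for marker_id, img_points in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(obj_points, img_points[0],
                                      camera_matrix, dist_coeffs)
        if ok:
            # tvec is the marker position in the camera frame: a localization cue.
            print(f"marker {marker_id}: {np.round(tvec.ravel(), 3)} m from camera")
```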

5. Industry Applications and Manufacturers

As noted above, autonomous forklifts operate independently of human control, with machine vision algorithms playing a critical role in allowing them to navigate, perceive their surroundings and make decisions based on what they see.
This section is summarized in Figure 19, which highlights the core components discussed here and provides a schematic overview.
Figure 19. Summary of the content in Section 5.
Machine learning, on the other hand, plays a pivotal role, allowing the vehicle to continuously adapt and improve its performance based on accumulated experience. Connectivity features facilitate communication between autonomous vehicles and infrastructure, further enhancing safety and efficiency. This examination of autonomous technology emphasizes the smooth fusion of hardware and software, opening the door to a future in which self-driving vehicles effortlessly integrate into our everyday routines.

5.1. Autonomous Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs)

Autonomous guided vehicles (AGVs) and autonomous mobile robots (AMRs) are essential tools for modern industrial automation and logistics. Both technologies streamline material handling, inventory management, and intra-facility transportation. While they share the common goal of autonomous navigation, AGVs and AMRs differ significantly in their operational methods, flexibility, and deployment scenarios. These distinctions set them apart from the autonomous vehicles often seen on roads and highways.
This section is summarized in Table 10, which presents a comparison between automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) in the context of industrial automation. The table highlights the key differences in functionality, flexibility and technological integration, providing a clear overview of their respective roles and advantages in modern industrial applications.
Table 10. Comparison of AGVs and AMRs in industrial automation inspired by [118].

5.2. Companies Manufacturing Autonomous Forklifts

This research also focuses on automated forklifts designed for multi-level load handling in warehouse racking systems, emphasizing their essential vertical lifting capabilities. Conversely, floor-level automated guided vehicles (AGVs), which do not support vertical lifting into racking systems, are excluded from consideration as they serve different functional purposes.
There are several companies that provide autonomous forklifts or autonomous guided vehicles designed for material handling and warehouse automation. The vehicles described below are specifically designed to handle palletized loads.
Table 11 lists the leading companies in the field of self-driving forklifts, which provide warehouse solutions and fleet management solutions in the market, along with the country in which they originate.
Table 11. Notable forklift companies adopting autonomous technologies.
Further details about the pioneering companies’ products in this domain can be found below:
  • Toyota Material Handling: Toyota is a well-known manufacturer of forklifts, and they also offer autonomous forklifts under the brand name “Autopilot”. Their AGVs are designed to work alongside human operators or autonomously in warehouses and distribution centers.
  • KION Group: KION Group is a leading provider of intralogistics solutions and offers autonomous forklifts through their subsidiary, “Linde Material Handling”. Their AGVs are designed to navigate through warehouses and perform various material handling tasks without human intervention.
  • Hyster-Yale Materials Handling: Hyster-Yale is another major manufacturer of forklifts and material-handling equipment. They offer autonomous solutions under the brand name “Hyster Robotic” or “Balyo” (their partner in autonomous technology). Their AGVs can be retrofitted onto existing forklifts, enabling them to operate autonomously.
  • Seegrid: Seegrid specializes in autonomous mobile robots for material handling. They offer a range of autonomous pallet trucks and tow tractors that can navigate complex environments using vision-based technology. These AGVs are designed to optimize workflows and increase efficiency in warehouses and distribution centers.
  • Aethon (acquired by ST Engineering): Aethon, now part of ST Engineering, offers autonomous mobile robots for material transportation and delivery within hospitals and industrial facilities. While not strictly forklifts, their AGVs are capable of autonomously transporting loads and can be customized to handle specific tasks.
The above are just a few examples, and the autonomous forklift market continues to develop, with new entrants joining the industry. Ongoing research is required in this field, as the market for autonomous vehicles is ever-evolving.
These companies are at the forefront of developing autonomous forklift technology, aiming to improve efficiency and safety in warehouse and logistics operations.
Below, different types of autonomous forklifts are presented to review the ideas and proposals about this technology in industrial and academic domains. In this section, the most prominent contributions drawn from various articles and conference papers in the field of design and advancement of autonomous forklifts are presented, and problems and challenges are outlined as well. These contributions, along with many others, advance autonomous forklift technology, making it more capable, efficient, safe and sustainable.
The companies listed are well-known manufacturers in the forklift industry, but whether they are the most well known or top-selling can vary depending on different factors such as geographical location, market segment and specific requirements of customers. These companies have established themselves as leaders in the forklift market, but there are other manufacturers as well who may also be prominent in certain regions or sectors. Additionally, the status of being the top-selling manufacturer can change over time due to various factors such as technological advancements, market trends and competitive strategies.
The following are selected companies that manufacture autonomous forklifts in the market:
  • The K-MATIC automated forklift uses smart software to manage tasks, navigate tight spaces and safely interact with other warehouse equipment. Three-dimensional cameras and laser sensors ensure smooth operation and fast adaptation to changing environments.
  • Seegrid’s Palion Lift RS1 AMR delivers comprehensive automation for low-lift processes, ensuring safe and reliable material handling from storage to staging areas and work cells. With its advanced Smart Path sensing capabilities and 360° safety coverage, the RS1 performs exceptionally well in complex enterprise environments.
  • Crown Equipment offers a forklift that combines manual and automatic control in one machine. It has a single mast that extends forward and can be switched between driver-operated and self-driving modes with a simple control.
  • OTTO Motors, acquired by Rockwell Automation, makes industrial self-driving vehicles designed for heavy-duty material transport. Its autonomous forklift is engineered to facilitate the movement of pallets between stands, machines and various floor locations.
    Utilizing specialized sensors to ensure that the payload remains within safe limits, the OTTO Lifter autonomously and reliably assesses and picks up pallets, even when they are misplaced or wrapped in stretched film. Five 3D cameras aid in detecting overhanging objects, as well as in pallet tracking and docking.
  • Vecna Robotics builds AMRs that can handle a variety of tasks, including towing trailers, moving pallets and performing machine tending.
  • Balyo offers a fleet of AMRs designed for the agile and efficient movement of goods in warehouses and distribution centers. Balyo’s VENNY robotic truck is equipped with unique 3D pallet detection.
  • The Toyota RAE250 Autopilot is a top-of-the-line forklift designed specifically for warehouse automation. It combines the trusted design of a regular Toyota reach truck with a built-in navigation and safety system, allowing it to operate autonomously. This means that users receive the reliability of a proven forklift with the efficiency and cost-saving benefits of automation.
  • Hyster Robotic CB is a self-driving truck that does not need any special adjustments to roads or its surroundings. It uses a laser system (LiDAR) to find its location and avoid obstacles, relying on the existing environment itself as a giant map.
  • Combilift: The Combi-AGT offers flexibility by operating autonomously in guided aisles, functioning in free-roaming mode, or being manually driven. Figure 20 depicts the autonomous-guided forklift produced by Combilift.
    Figure 20. The Combi-AGT autonomous-guided forklift truck by Combilift.
Table 12 below shows various forklift manufacturers and the different types of sensors used for navigation.
Table 12. Forklift manufacturers and types of sensors used for navigation.

5.3. Key Industry Applications

One of the most important advantages of using autonomous vehicles is that they are not subject to human control. Firstly, humans may become tired or fatigued during long working hours. Furthermore, other factors such as losing focus while driving or talking on the phone are not present in autonomous vehicles, and therefore, the use of autonomous vehicles can be considered much safer than human-controlled vehicles.
As outlined in Section 3, partially autonomous vehicles with varying levels of automation are currently accessible worldwide, providing a more confident and safer way of delivering this service.
Many expected advantages will be available if autonomous vehicles are used. The first will be a reduction in accidents as a result of minimizing the proportion of human interference with the vehicle while driving. Secondly, the exertion of driving would be alleviated for vehicle drivers, allowing them to perform other tasks or enjoy a moment of respite. Regarding vehicles used for transporting goods, it will be possible to operate those vehicles for longer hours than vehicles driven by humans.

5.4. Performance Benchmarks and Empirical Insights into Autonomous Forklifts

As per a new study [119], autonomous forklifts and intralogistics systems have significantly enhanced operational efficiency and reduced costs across various industries. A focused investigation into small- and medium-sized enterprises (SMEs) within Romania’s forklift industry highlights the transformative impact of artificial intelligence (AI) integration. Table 13 summarizes the influence of AI implementation on key business metrics, showcasing substantial benefits in operational performance and resource optimization.
Table 13. Impact of AI implementation on key business metrics as per [119].

6. Discussion

For a long time, the technology of self-driving vehicles was a fantasy; now that dream has become a reality. These vehicles can provide services to humanity such as delivering orders or carrying goods to storage places or points of sale.

6.1. Autonomous Forklifts: Economic Impact and Ethical Considerations

While an in-depth examination of empirical validation, economic impact and ethical considerations of autonomous forklifts is beyond the scope of this research, their importance warrants a brief overview. Autonomous forklifts leverage advanced robotics, AI and sensor technologies to revolutionize material-handling processes. Rigorous testing in real-world conditions has demonstrated their reliability, efficiency and safety, as well as their adaptability to diverse warehouse layouts and seamless integration with existing logistics systems.
From an economic perspective, these technologies offer substantial benefits, including reduced labor costs, enhanced operational efficiency and minimized inventory damage, significantly improving profitability and competitiveness. However, ethical considerations, such as potential job displacement, equitable distribution of economic benefits and ensuring safe human–robot interactions, remain critical. Addressing these challenges is vital to achieving the sustainable and socially responsible integration of autonomous forklifts into supply chain operations.

6.2. Challenges and Implications of Integrating Autonomous Technology into Forklifts

Margarita Martínez-Díaz and Francesc Soriguera [120] believe that autonomous vehicle manufacturers do not anticipate commercially releasing completely autonomous vehicles in the near future due to many considerations such as human behavior, ethics and traffic management. This has led to autonomous technology being deployed first in less crowded and less complex environments, for example in forklifts operating in warehouses. Furthermore, Martínez-Díaz and Soriguera claimed that, technically, the clear “detection of obstacles at high speeds and long distances is one of the biggest difficulties” that requires attention for the development of viable solutions.
Table 14 showcases selected publications addressing various challenges and their proposed solutions.
Table 14. Selected publications on key challenges and proposed solutions.
Designing machine vision systems for autonomous forklifts involves numerous complex challenges critical for ensuring safe and efficient operation in industrial environments. One primary challenge is managing the diverse and dynamic conditions these forklifts encounter. They must function effectively under varied lighting conditions, including the bright lights of loading docks, shadows in warehouses and low-light environments. Additionally, they need to handle dynamic obstacles such as other forklifts, workers and varying types of inventory, accurately predicting their movements. Navigating from narrow aisles to open storage areas requires the vision system to understand different spatial configurations and traffic patterns thoroughly.
Accurate and fast object detection and classification are essential, demanding high-speed processing and advanced algorithms to minimize latency. The system must detect small and partially occluded objects, such as pallets, boxes or equipment partially hidden behind shelves, and recognize objects despite variations in appearance due to orientation, distance and environmental conditions. Robustness and reliability are critical, requiring effective sensor fusion that integrates data from multiple sensors like cameras, LiDAR and radar. This integration must be performed efficiently to enhance reliability and ensure that fail-safe mechanisms are in place to handle sensor malfunctions without compromising safety.
Real-time processing poses significant computational demands, requiring a balance between high performance and limitations in power and onboard computing resources. Efficient algorithms that process data quickly while maintaining accuracy are essential for the real-time operation of autonomous forklifts. Training machine learning models requires large and diverse datasets to ensure they are comprehensive and representative. Balancing simulation and real-world testing is vital for robust system development, as real-world validation accounts for unpredictable conditions not present in simulations.
Handling edge cases and long-tail problems is another challenge, as the system must effectively manage rare and unusual scenarios, such as navigating through unexpected obstructions or dealing with unusual inventory sizes and shapes. Continuous learning is needed for the system to adapt to new situations without overfitting or degrading performance in known scenarios.
The research in [136] investigates people’s receptiveness towards autonomous vehicles with respect to their trust and sustainability concerns; this objective was accomplished by formulating a technology acceptance model (TAM) and administering a questionnaire to 391 participants. Ref. [137] presents the next challenge for autonomous driving in 2021, described as the technological equivalent of the space race of this century; the authors argue that a rethink is required and offer an alternative vision. Sun Tang in [138] provided a comprehensive review and introduction of simulator investigation, using user portrayal to close the divide between the present and the future. Todd Litman in [139] examined the effects of autonomous vehicles and their implications for transportation planning; the study suggests that Level 5 autonomous vehicles, capable of operating “without a driver, may be commercially available and legal for use in some jurisdictions by late 2020”, and that they will only become popular when autonomous vehicles become widespread and affordable, possibly in the timeframe of 2040 to 2060, according to the authors’ expectation. The paper in [140] “explores differences in perceptions of AV safety across 33,958 individuals in 51 countries”. The master’s thesis by Filip Hucko [141] examines the essential pillars of autonomous technology in general and further addresses the “development of autonomous vehicles and their future implications in a sharing economy”. Concerns have also been raised for Japan, where the aging workforce is a critical issue. Finally, current techniques for pallet identification and positioning, which rely on a single source of data such as RGB images or point clouds, can result in inaccurate placement or require significant computational resources, increasing costs significantly.

6.3. Additional Considerations

Bundle loads with uneven shapes or protruding elements pose a significant hurdle for autonomous forklifts. These features can prevent the forklift from securely grasping the load, leading to instability during transport. This instability increases the risk of the load shifting or falling, potentially causing damage to the goods, surrounding infrastructure and injuring nearby personnel. Furthermore, irregular shapes can disrupt the forklift’s sensors, hindering its ability to accurately measure distances and navigate obstacles. Therefore, ensuring the safe handling of such loads is crucial. This necessitates advancements in sensor technology and more sophisticated algorithms to mitigate these risks and maintain the safety standards required for autonomous forklifts.
Autonomous vehicles require perceptual systems to understand their surroundings, which can be categorized into 2D and 3D perception. In the context of cars, 2D and 3D perception rely on cameras and sensors to capture visual information such as pictures and video streams. Algorithms analyze these data to identify objects, interpret traffic signals and recognize pedestrians or other vehicles. This allows the vehicle to perceive the environment and take appropriate actions. However, forklifts operate in complex 3D environments such as warehouses with racks, merchandise and other obstacles. In such scenarios, 2D perception alone is insufficient for safe and efficient navigation. Therefore, forklifts often utilize depth-sensing technologies like LiDAR or depth cameras to acquire 3D data about their surroundings. This enables accurate perception of object sizes, shapes and positions in three-dimensional space. Computer vision for cars often reduces the problem to two dimensions, exemplified by the Bird’s-Eye View (BEV) mapping paradigm used in automotive technology, which employs a 2D top-down perspective, whereas forklifts face the additional complexity of perceiving and navigating in three dimensions, including the vertical lifting axis.
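The jump from 2D to 3D perception can be illustrated with the standard pinhole back-projection used with depth cameras, in which each pixel with a valid depth reading is lifted into a 3D point in the camera frame. The intrinsic values in the sketch below are assumptions chosen for demonstration.

```python
import numpy as np

# Assumed depth-camera intrinsics (focal lengths and principal point in pixels).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0


def depth_to_points(depth_m):
    """Back-project a depth image (H x W, metres) into an N x 3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth reading


# Toy 4x4 depth image: a flat surface 2 m in front of the camera.
demo_depth = np.full((4, 4), 2.0)
print(depth_to_points(demo_depth))
```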
In all scenarios, additional information enhances the vehicle’s ability to make safe and efficient decisions. Whether the vehicle is a car or a forklift, more data enable a more comprehensive understanding of its surroundings and internal state, which is crucial for effective operation.
Among the various sensors employed in autonomous forklifts, LiDAR and cameras stand out as the most prevalent due to their comprehensive capabilities in mapping, navigation and object detection. LiDAR’s exceptional accuracy in generating 3D maps, coupled with cameras’ versatility in visual recognition tasks, makes them indispensable components in the sensor arrays of these autonomous vehicles.
Moreover, when considering sensor specifications, key factors include frame rate, which indicates the number of frames captured per second, often measured in frames per second (fps). Size refers to the physical dimensions of the sensor, encompassing overall size and pixel dimensions. Visibility pertains to the sensor’s ability to capture data in different lighting conditions or environments. The field of view describes the extent of the scene captured by the sensor. Resolution denotes the level of detail in the captured images. Weather effects indicate the sensor’s resilience in diverse weather conditions. Range signifies the distance over which the sensor can effectively operate. Depending on the type of sensor, these specifications can vary, ensuring optimal performance in specific applications.
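As a small worked example of how these specifications interact, the sketch below estimates the width of the scene covered at a given range from the horizontal field of view, and the resulting pixel density for a camera of a given horizontal resolution; the numbers are illustrative only, not taken from any specific sensor datasheet.

```python
import math


def coverage_width_m(fov_deg, range_m):
    """Width of the scene covered at a given range for a horizontal field of view."""
    return 2.0 * range_m * math.tan(math.radians(fov_deg) / 2.0)


def pixels_per_metre(h_resolution_px, fov_deg, range_m):
    """Approximate horizontal pixel density on a target at the given range."""
    return h_resolution_px / coverage_width_m(fov_deg, range_m)


# Illustrative camera: 90 degree horizontal FOV, 1280 px wide, pallet at 5 m.
print(f"coverage at 5 m: {coverage_width_m(90, 5.0):.1f} m")        # 10.0 m
print(f"detail on target: {pixels_per_metre(1280, 90, 5.0):.0f} px/m")  # 128 px/m
```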
As mentioned in [142,143], numerous leading manufacturers of autonomous forklifts commonly employ a diverse array of machine vision models, including convolutional neural networks, deep learning models, 3D vision and depth sensing models, semantic segmentation models, instance segmentation models, reinforcement learning models and optical character recognition (OCR) models. CNNs, such as YOLO, SSD and Faster R-CNN, are widely adopted for image recognition and object detection due to their real-time accuracy and speed. Deep learning frameworks like TensorFlow and PyTorch facilitate the development of custom models tailored to specific applications. For processing 3D point cloud data from LiDAR or stereo cameras, models such as PointNet, PointNet++ and VoxelNet are frequently utilized. Semantic segmentation models like U-Net, SegNet and DeepLab aid in image segmentation and environmental context understanding, while instance segmentation models like Mask R-CNN provide both object location and segmentation masks. Reinforcement learning models contribute to decision-making and navigation by learning optimal paths and actions through trial and error. Additionally, OCR models such as Tesseract and custom CNN-based OCR models are used to interpret text, such as labels and barcodes, within the forklift’s environment. YOLO and Faster R-CNN are particularly prevalent for their robust real-time object detection performance, which is crucial for the dynamic environments in which autonomous forklifts operate. Mask R-CNN is also commonly used due to its combined object detection and segmentation capabilities.
Current autonomous forklift technology, despite significant advancements, faces several hurdles on the road to widespread adoption. These challenges include navigating complex and dynamic environments like busy warehouses with human workers and moving equipment. Sensor limitations, particularly in poor lighting and with reflective surfaces, impair navigation and object detection accuracy. Integrating these forklifts into existing infrastructure can be expensive and requires substantial initial investment, posing a barrier for smaller businesses. Additionally, they lack flexibility, often needing significant reprogramming to adapt to new tasks or layout changes.
Safety and regulatory concerns are paramount. Proving the safety of human–robot interactions requires complex and time-consuming efforts. Reliability and maintenance remain ongoing challenges, as technical issues can disrupt operations. Interoperability with human workers presents another hurdle, necessitating sophisticated systems for seamless coordination. Data security and privacy are also critical concerns due to the vast amount of data generated and used. Scaling up from pilot projects to full-scale deployments requires careful planning to handle increased volume and complexity.
Addressing these limitations, shown in Table 15, necessitates ongoing research, development and collaboration among technology providers, warehouse operators and regulatory bodies. Advancements in sensor technology, improved decision-making capabilities for complex environments and increased adaptability to varied tasks are crucial. Additionally, reducing costs, improving safety protocols and establishing robust data security measures are essential for the wider adoption of autonomous forklifts.
Table 15. Critical limitations of current autonomous forklift technology.

7. Conclusions

In conclusion, the article provides a comprehensive overview of the current state-of-the-art solutions for autonomous forklifts, focusing on the crucial role of sensors, machine vision, object detection models and systems. By delving into the various sensor types, machine vision techniques and object detection models, the article highlights the key challenges and advancements in this rapidly evolving field.
This research provides a valuable foundation for further research and development in autonomous forklifts, ultimately contributing to increased efficiency, safety and productivity in warehouse and logistics operations. The article specifically highlights the integration of advanced machine vision algorithms, such as YOLO and DETR, or other machine vision models to detect load types and ensure accurate spatial perception for efficient load handling. By providing an overview of machine vision applications in autonomous forklifts, this article aims to stimulate further research and development efforts, promoting technological advancement. Additionally, we examined leading forklift manufacturers, reviewing their specifications to understand their technological trajectory, particularly their incorporation of AI-driven solutions and how these developments align with industry needs for automation and precise handling in dynamic warehouse environments.

Author Contributions

M.A.F. was responsible for the primary writing and data preparation. D.T., J.C., P.T. and J.M. provided technical guidance and supported the writing of the methods section. G.D. and P.T. were responsible for revising, reviewing and editing the paper. Visualization was handled by J.M. and J.C. Reviewing, supervision and project administration were performed by D.T. Funding acquisition was carried out by J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Lero—the Science Foundation Ireland Research Centre for Software (www.lero.ie) and Combilift under the project titled: APPS Autonomous Payload Perception Systems: A Technical Feasibility Exploration.

Data Availability Statement

Data is contained within the article.

Acknowledgments

We acknowledge the support of Lero—the Science Foundation Ireland Research Centre for Software— and Combilift for funding this project. The term Blended Autonomous Vehicles (BAV), coined by our team, underscores our emphasis on practical autonomous systems rather than full autonomy.

Conflicts of Interest

Authors Joseph Coleman and James Maguire were employed by the company Combilift. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ToF: Time of Flight
LiDAR: Light Detection and Ranging
Radar: Radio Detection and Ranging
FOV: Field of View
CNN: Convolutional Neural Network
GNSS: Global Navigation Satellite System
AGV: Automated Guided Vehicle
AVs: Autonomous Vehicles
HMI: Human–Machine Interaction
IMU: Inertial Measurement Unit
RTK: Real-Time Kinematic

References

  1. Kuutti, S.; Fallah, S.; Bowden, R.; Barber, P. Deep Learning for Autonomous Vehicle Control: Algorithms, State-of-the-Art, and Future Prospects; Morgan & Claypool Publishers: San Rafael, CA, USA, 2019. [Google Scholar]
  2. Ondruš, J.; Kolla, E.; Vertal’, P.; Šarić, Ž. How do autonomous cars work? In Transportation Research Procedia; Elsevier: Amsterdam, The Netherlands, 2020; Volume 44, pp. 226–233. [Google Scholar]
  3. Thorpe, C.; Hebert, M.H.; Kanade, T.; Shafer, S.A. Vision and navigation for the Carnegie-Mellon Navlab. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 362–373. [Google Scholar] [CrossRef]
  4. Kpmg, C.; Silberg, G.; Wallace, R.; Matuszak, G.; Plessers, J.; Brower, C.; Subramanian, D. Self-Driving Cars: The Next Revolution; Kpmg: Seattle, WA, USA, 2012. [Google Scholar]
  5. Tamba, T.A.; Hong, B.; Hong, K.S. A path following control of an unmanned autonomous forklift. Int. J. Control. Autom. Syst. 2009, 7, 113–122. [Google Scholar] [CrossRef]
  6. Widyotriatmo, A.; Hong, K.-S. Configuration control of an autonomous vehicle under nonholonomic and field-of-view constraints. Int. J. Imaging Robot. 2015, 15, 126–139. [Google Scholar]
  7. Mohammadi, A.; Mareels, I.; Oetomo, D. Model predictive motion control of autonomous forklift vehicles with dynamics balance constraint. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1–6. [Google Scholar]
  8. Widyotriatmo, A. Comparative study of stabilization controls of a forklift vehicle. Acta Mech. Autom. 2019, 13, 181–188. [Google Scholar] [CrossRef]
  9. Zhang, Z.; Xiao, B. Sensor fault diagnosis and fault tolerant control for forklift based on sliding mode theory. IEEE Access 2020, 8, 84858–84866. [Google Scholar] [CrossRef]
  10. Adam, N.; Aiman, M.; Nafis, W.M.; Irawan, A.; Muaz, M.; Hafiz, M.; Razali, A.R.; Ali, S.N.S. Omnidirectional configuration and control approach on mini heavy loaded forklift autonomous guided vehicle. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2017; Volume 90, p. 01077. [Google Scholar]
  11. Cui, G.; Lu, L.; He, Z.; Yao, L.; Yang, C.; Huang, B.; Hu, Z. A robust autonomous mobile forklift pallet recognition. In Proceedings of the 2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR 2010), Wuhan, China, 6–7 March 2010; Volume 3, pp. 286–290. [Google Scholar]
  12. Draganjac, I.; Miklić, D.; Kovačić, Z.; Vasiljević, G.; Bogdan, S. Decentralized control of multi-AGV systems in autonomous warehousing applications. IEEE Trans. Autom. Sci. Eng. 2016, 13, 1433–1447. [Google Scholar] [CrossRef]
  13. López, J.; Zalama, E.; Gómez-García-Bermejo, J. A simulation and control framework for AGV based transport systems. Simul. Model. Pract. Theory 2022, 116, 102430. [Google Scholar] [CrossRef]
  14. Garibotto, G.; Masciangelo, S.; Bassino, P.; Coelho, C.; Pavan, A.; Marson, M. Industrial exploitation of computer vision in logistic automation: Autonomous control of an intelligent forklift truck. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation (Cat. No. 98CH36146), Leuven, Belgium, 20–20 May 1998; Volume 2, pp. 1459–1464. [Google Scholar]
  15. Abdellatif, M.; Shoeir, M.; Talaat, O.; Gabalah, M.; Elbably, M.; Saleh, S. Design of an autonomous forklift using kinect. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2018; Volume 153, p. 04005. [Google Scholar]
  16. Girbés, V.; Armesto, L.; Tornero, J. Path following hybrid control for vehicle stability applied to industrial forklifts. Robot. Auton. Syst. 2014, 62, 910–922. [Google Scholar] [CrossRef]
  17. Song, Y.-H.; Park, J.-H.; Lee, K.-C.; Lee, S. Network-based distributed approach for implementation of an unmanned autonomous forklift. J. Inst. Control. Robot. Syst. 2010, 16, 898–904. [Google Scholar] [CrossRef]
  18. Bellomo, N.; Marcuzzi, E.; Baglivo, L.; Pertile, M.; Bertolazzi, E.; De Cecco, M. Pallet pose estimation with LIDAR and vision for autonomous forklifts. IFAC Proc. Vol. 2009, 42, 612–617. [Google Scholar] [CrossRef]
  19. Scheiner, N.; Kraus, F.; Appenrodt, N.; Dickmann, J.; Sick, B. Object detection for automotive radar point clouds—A comparison. AI Perspect. 2021, 3, 6. [Google Scholar] [CrossRef]
  20. Correa, A.; Walter, M.R.; Fletcher, L.; Glass, J.; Teller, S.; Davis, R. Multimodal interaction with an autonomous forklift. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 243–250. [Google Scholar]
  21. Tuncali, C.E.; Fainekos, G.; Prokhorov, D.; Ito, H.; Kapinski, J. Requirements-driven test generation for autonomous vehicles with machine learning components. IEEE Trans. Intell. Veh. 2019, 5, 265–280. [Google Scholar] [CrossRef]
  22. Baglivo, L.; Biasi, N.; Biral, F.; Bellomo, N.; Bertolazzi, E.; De Cecco, M. Autonomous pallet localization and picking for industrial forklifts: A robust range and look method. Meas. Sci. Technol. 2011, 22, 085502. [Google Scholar] [CrossRef]
  23. Iinuma, R.; Kojima, Y.; Onoyama, H.; Fukao, T.; Hattori, S.; Nonogaki, Y. Pallet handling system with an autonomous forklift for outdoor fields. J. Robot. Mechatron. 2020, 32, 1071–1079. [Google Scholar] [CrossRef]
  24. Behrje, U.; Himstedt, M.; Maehle, E. An autonomous forklift with 3D time-of-flight camera-based localization and navigation. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1739–1746. [Google Scholar]
  25. Changalvala, R.; Malik, H. LiDAR data integrity verification for autonomous vehicle. IEEE Access 2019, 7, 138018–138031. [Google Scholar] [CrossRef]
  26. Lynch, L.; Newe, T.; Clifford, J.; Coleman, J.; Walsh, J.; Toal, D. Automated ground vehicle (AGV) and sensor technologies—A review. In Proceedings of the 2018 12th International Conference on Sensing Technology (ICST), Limerick, Ireland, 4–6 December 2018; pp. 347–352. [Google Scholar]
  27. Kim, J.; Han, D.S.; Senouci, B. Radar and vision sensor fusion for object detection in autonomous vehicle surroundings. In Proceedings of the 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN), Prague, Czech Republic, 3–6 July 2018; pp. 76–78. [Google Scholar]
  28. Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Glaeser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1341–1360. [Google Scholar] [CrossRef]
  29. Jhong, S.-Y.; Chen, Y.-Y.; Hsia, C.-H.; Wang, Y.-Q.; Lai, C.-F. Density-Aware and Semantic-Guided Fusion for 3D Object Detection using LiDAR-Camera Sensors. IEEE Sens. J. 2023, 23, 22051–22063. [Google Scholar] [CrossRef]
  30. Estiri, F.A. 3D Object Detection and Tracking Based On Point Cloud Library: Special Application in Pallet Picking for Autonomous Mobile Machines. Master’s Thesis, Tampere University, Tampere, Finland, 2014. [Google Scholar]
  31. Wulf, O.; Lecking, D.; Wagner, B. Robust self-localization in industrial environments based on 3D ceiling structures. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1530–1534. [Google Scholar]
  32. Vasiljević, G.; Miklić, D.; Draganjac, I.; Kovačić, Z.; Lista, P. High-accuracy vehicle localization for autonomous warehousing. Robot. Comput.-Integr. Manuf. 2016, 42, 1–16. [Google Scholar] [CrossRef]
  33. Costea, A.D.; Vatavu, A.; Nedevschi, S. Obstacle localization and recognition for autonomous forklifts using omnidirectional stereovision. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Republic of Korea, 28 June–1 July 2015; pp. 531–536. [Google Scholar]
  34. Stachniss, C. Robotic Mapping and Exploration; Springer: Berlin/Heidelberg, Germany, 2009; Volume 55. [Google Scholar]
  35. Wang, F.; Lü, E.; Wang, Y.; Qiu, G.; Lu, H. Efficient Stereo Visual Simultaneous Localization and Mapping for an Autonomous Unmanned Forklift in an Unstructured Warehouse. Appl. Sci. 2020, 10, 698. [Google Scholar] [CrossRef]
  36. Kocić, J.; Jovičić, N.; Drndarević, V. Sensors and sensor fusion in autonomous vehicles. In Proceedings of the 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, 20–21 November 2018; pp. 420–425. [Google Scholar]
  37. Aldjia, B.; Boussaad, L. Sensor Level Fusion for Multi-modal Biometric Identification using Deep Learning. In Proceedings of the 2021 International Conference on Recent Advances in Mathematics and Informatics (ICRAMI), Tebessa, Algeria, 21–22 September 2021; pp. 1–5. [Google Scholar]
  38. Ferreira, J.F.; Lobo, J.; Dias, J. Bayesian real-time perception algorithms on GPU: Real-time implementation of Bayesian models for multimodal perception using CUDA. J. Real-Time Image Process. 2011, 6, 171–186. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote. Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  40. Krug, R.; Stoyanov, T.; Tincani, V.; Andreasson, H.; Mosberger, R.; Fantoni, G.; Lilienthal, A.J. The next step in robot commissioning: Autonomous picking and palletizing. IEEE Robot. Autom. Lett. 2016, 1, 546–553. [Google Scholar] [CrossRef]
  41. Vivaldini, K.C.; Galdames, J.P.; Pasqual, T.B.; Becker, M.; Caurin, G.A. Intelligent Warehouses: Focus on the automatic routing and path planning of robotic forklifts able to work autonomously. Intell. Transp. Veh. 2011, 1, 115. [Google Scholar]
  42. Sankaran, P.; Li, M.P.; Kuhl, M.E.; Ptucha, R.; Ganguly, A.; Kwasinski, A. Simulation Analysis of a Highway DNN for Autonomous Forklift Dispatching. In Proceedings of the IIE Annual Conference, Seattle, WA, USA, 21–24 May 2022; Institute of Industrial and Systems Engineers (IISE): Peachtree Corners, GA, USA, 2019; pp. 432–437. [Google Scholar]
  43. Yuan, Y.; Zhen, L.; Wu, J.; Wang, X. Quantum behaved particle swarm optimization of inbound process in an automated warehouse. J. Oper. Res. Soc. 2022, 74, 2199–2214. [Google Scholar] [CrossRef]
  44. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef]
  45. Wang, S.; Ye, A.; Guo, H.; Gu, J.; Wang, X.; Yuan, K. Autonomous Pallet Localization and Picking for Industrial Forklifts Based on the Line Structured Light. In Proceedings of the 2016 IEEE International Conference on Mechatronics and Automation, Harbin, China, 7–10 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 707–713. [Google Scholar]
  46. Chew, J.Y.; Kawamoto, M.; Okuma, T.; Yoshida, E.; Kato, N. Adaptive attention-based human machine interface system for teleoperation of industrial vehicle. Sci. Rep. 2021, 11, 17284. [Google Scholar] [CrossRef]
  47. Chew, J.Y.; Okayama, K.; Okuma, T.; Kawamoto, M.; Onda, H.; Kato, N. Development of a virtual environment to realize human-machine interaction of forklift operation. In Proceedings of the 2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA), Daejeon, Republic of Korea, 1–3 November 2019; pp. 112–118. [Google Scholar]
  48. Divakarla, K.P.; Emadi, A.; Razavi, S.; Habibi, S.; Yan, F. A review of autonomous vehicle technology landscape. Int. J. Electr. Hybrid Veh. 2019, 11, 320–345. [Google Scholar] [CrossRef]
  49. Cardarelli, E.; Sabattini, L.; Digani, V.; Secchi, C.; Fantuzzi, C. Interacting with a multi AGV system. In Proceedings of the 2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2015; pp. 263–267. [Google Scholar]
  50. Vlachos, I.; Pascazzi, R.M.; Ntotis, M.; Spanaki, K.; Despoudi, S.; Repoussis, P. Smart and flexible manufacturing systems using Autonomous Guided Vehicles (AGVs) and the Internet of Things (IoT). Int. J. Prod. Res. 2022, 62, 5574–5595. [Google Scholar] [CrossRef]
  51. Friebel, V. Usability Criteria for Human-Machine Interaction with Automated Guided Vehicles: An Exploratory Study on User Perceptions. Independent Thesis Advanced Level (Degree of Master), Uppsala University, Uppsala, Sweden, 2022. [Google Scholar]
  52. Matute, J.A.; Diaz, S.; Zubizarreta, A.; Karimoddini, A.; Perez, J. An Approach to Global and Behavioral Planning for Automated Forklifts in Structured Environments. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; pp. 3423–3428. [Google Scholar]
  53. Grant, W.S.; Voorhies, R.C.; Itti, L. Efficient Velodyne SLAM with point and plane features. Auton. Robot. 2019, 43, 1207–1224. [Google Scholar] [CrossRef]
  54. Ishigooka, T.; Yamada, H.; Otsuka, S.; Kanekawa, N.; Takahashi, J. Symbiotic safety: Safe and efficient human-machine collaboration by utilizing rules. In Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 14–23 March 2022; pp. 280–281. [Google Scholar]
  55. Drabek, C.; Kosmalska, A.; Weiss, G.; Ishigooka, T.; Otsuka, S.; Mizuochi, M. Safe interaction of automated forklifts and humans at blind corners in a warehouse with infrastructure sensors. In Proceedings of the Computer Safety, Reliability, and Security: 40th International Conference, SAFECOMP 2021, York, UK, 8–10 September 2021; Proceedings 40. Springer: Berlin/Heidelberg, Germany, 2021; pp. 163–177. [Google Scholar]
  56. Tews, A. Safe and Dependable Operation of a Large Industrial Autonomous Forklift; Citeseer: Princeton, NJ, USA, 2009. [Google Scholar]
  57. Lam, J.S.L.; Chen, Z.S. Survey of economic, energy and environmental aspects of cargo handling equipment in container ports. In Proceedings of the 2021 6th International Conference on Transportation Information and Safety (ICTIS), Wuhan, China, 22–24 October 2021; pp. 1357–1363. [Google Scholar]
  58. Kayikci, Y. Sustainability impact of digitization in logistics. Procedia Manuf. 2018, 21, 782–789. [Google Scholar] [CrossRef]
  59. Xie, L. Mechatronics Design and Energy-Efficient Navigation of a Heavy-Duty Omni-Directional Mecanum Autonomous Mobile Robot. Ph.D. Dissertation, ResearchSpace@Auckland, Auckland, New Zealand, 2018. [Google Scholar]
  60. Iris, Ç.; Lam, J.S.L. A review of energy efficiency in ports: Operational strategies, technologies and energy management systems. Renew. Sustain. Energy Rev. 2019, 112, 170–182. [Google Scholar] [CrossRef]
  61. OTTO Lifter | Autonomous Forklift | OTTO by Rockwell Automation. Available online: https://ottomotors.com/lifter/ (accessed on 4 June 2024).
  62. Royo, S.; Ballesta-Garcia, M. An overview of lidar imaging systems for autonomous vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef]
  63. Ahmad, N.; Ghazilla, R.A.R.; Khairi, N.M.; Kasi, V. Reviews on various inertial measurement unit (IMU) sensor applications. Int. J. Signal Process. Syst. 2013, 1, 256–262. [Google Scholar] [CrossRef]
  64. Seel, T.; Raisch, J.; Schauer, T. IMU-based joint angle measurement for gait analysis. Sensors 2014, 14, 6891–6909. [Google Scholar] [CrossRef] [PubMed]
  65. Joubert, N.; Reid, T.G.R.; Noble, F. Developments in modern GNSS and its impact on autonomous vehicle architectures. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 2029–2036. [Google Scholar]
  66. Schütz, A.; Bochkati, M.; Maier, D.; Pany, T. Closed-loop GNSS/INS simulation chain with RTK-accuracy for sensor fusion algorithm verification. In Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2020), Online, 21–25 September 2020; pp. 2867–2877. [Google Scholar]
  67. RTK Navigation Technologies for Autonomous Vehicles. Available online: https://www.microcontrollertips.com/rtk-navigation-technologies-for-autonomous-vehicles-faq/ (accessed on 11 June 2024).
  68. Tradacete, M.; Sáez, Á.; Arango, J.F.; Gómez Huélamo, C.; Revenga, P.; Barea, R.; López-Guillén, E.; Bergasa, L.M. Positioning system for an electric autonomous vehicle based on the fusion of multi-GNSS RTK and odometry by using an extended Kalman filter. In Advances in Physical Agents: Proceedings of the 19th International Workshop of Physical Agents (WAF 2018), Madrid, Spain, 22–23 November 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 16–30. [Google Scholar]
  69. Ng, K.M.; Johari, J.; Abdullah, S.A.C.; Ahmad, A.; Laja, B.N. Performance evaluation of the RTK-GNSS navigating under different landscape. In Proceedings of the 2018 18th International Conference on Control, Automation and Systems (ICCAS), PyeongChang, Republic of Korea, 17–20 October 2018; pp. 1424–1428. [Google Scholar]
  70. Bartknecht, F.; Siegfried, M.; Weber, H. Sensors solutions and predictive maintenance tools to decrease kiln and conveyor belt downtime. In Proceedings of the 2019 IEEE-IAS/PCA Cement Industry Conference (IAS/PCA), St. Louis, MO, USA, 28 April–2 May 2019; pp. 1–9. [Google Scholar]
  71. Kuutti, S.; Bowden, R.; Jin, Y.; Barber, P.; Fallah, S. A survey of deep learning applications to autonomous vehicle control. IEEE Trans. Intell. Transp. Syst. 2020, 22, 712–733. [Google Scholar] [CrossRef]
  72. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
  73. Hopkins, D.; Schwanen, T. Talking about automated vehicles: What do levels of automation do? Technol. Soc. 2021, 64, 101488. [Google Scholar] [CrossRef]
  74. Barnes, L. Practical Pallet Engagement with an Autonomous Forklift. Ph.D. Dissertation, ResearchSpace@Auckland, Auckland, New Zealand, 2022. [Google Scholar]
  75. Kovačić, Z.; Vasiljević, G.; Draganjac, I.; Petrović, T.; Oršulić, J.; Bogdan, S.; Miklić, D.; Kokot, M. Autonomous Vehicles and Automated Warehousing Systems for Industry 4.0. Eng. Power: Bull. Croat. Acad. Eng. 2019, 14, 17–23. [Google Scholar]
  76. Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A systematic review of perception system and simulators for autonomous vehicles research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef]
  77. Mohamed, I.S.; Capitanelli, A.; Mastrogiovanni, F.; Rovetta, S.; Zaccaria, R. Detection, localisation and tracking of pallets using machine learning techniques and 2D range data. Neural Comput. Appl. 2020, 32, 8811–8828. [Google Scholar] [CrossRef]
  78. Law, H.; Deng, J. CornerNet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
  79. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  80. Juyal, A.; Sharma, S.; Matta, P. Deep learning methods for object detection in autonomous vehicles. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 751–755. [Google Scholar]
  81. Ultralytics. YOLOv9. Available online: https://docs.ultralytics.com/models/yolov9/ (accessed on 22 December 2024).
  82. Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  83. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot Multibox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Proceedings, Part I, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  84. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  85. Tarmizi, I.A.; Abd Aziz, A. Vehicle detection using convolutional neural network for autonomous vehicles. In Proceedings of the 2018 International Conference on Intelligent and Advanced System (ICIAS), Kuala Lumpur, Malaysia, 13–14 August 2018; pp. 1–5. [Google Scholar]
  86. Bechtel, M.G.; McEllhiney, E.; Kim, M.; Yun, H. Deeppicar: A low-cost deep neural network-based autonomous car. In Proceedings of the 2018 IEEE 24th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Hakodate, Japan, 28–31 August 2018; pp. 11–21. [Google Scholar]
  87. Dreossi, T.; Ghosh, S.; Sangiovanni-Vincentelli, A.; Seshia, S.A. Systematic testing of convolutional neural networks for autonomous driving. arXiv 2017, arXiv:1708.03309. [Google Scholar]
  88. Tian, Y.; Pei, K.; Jana, S.; Ray, B. Deeptest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering, Gothenburg, Sweden, 27 May–3 June 2018; pp. 303–314. [Google Scholar]
  89. Suong, L.K.; Jangwoo, K. Detection of potholes using a deep convolutional neural network. J. Univers. Comput. Sci. 2018, 24, 1244–1257. [Google Scholar]
  90. Ćorović, A.; Ilić, V.; Đurić, S.; Mališ, M.; Pavković, B. The real-time detection of traffic participants using YOLO algorithm. In Proceedings of the 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, 20–21 November 2018; pp. 1–4. [Google Scholar]
  91. Laroca, R.; Severo, E.; Zanlorensi, L.A.; Oliveira, L.S.; Gonçalves, G.R.; Schwartz, W.R.; Menotti, D. A robust real-time automatic license plate recognition based on the YOLO detector. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–10. [Google Scholar]
  92. Han, J.; Liao, Y.; Zhang, J.; Wang, S.; Li, S. Target fusion detection of LiDAR and camera based on the improved YOLO algorithm. Mathematics 2018, 6, 213. [Google Scholar] [CrossRef]
  93. Sun, Y.; Su, T.; Tu, Z. Faster R-CNN based autonomous navigation for vehicles in warehouse. In Proceedings of the 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Munich, Germany, 3–7 July 2017; pp. 1639–1644. [Google Scholar]
  94. Shi, K.; Bao, H.; Ma, N. Forward vehicle detection based on incremental learning and fast R-CNN. In Proceedings of the 2017 13th International Conference on Computational Intelligence and Security (CIS), Hong Kong, China, 15–18 December 2017; pp. 73–76. [Google Scholar]
  95. Kato, S.; Takeuchi, E.; Ishiguro, Y.; Ninomiya, Y.; Takeda, K.; Hamada, T. An open approach to autonomous vehicles. IEEE Micro 2015, 35, 60–68. [Google Scholar] [CrossRef]
  96. Raffo, G.V.; Gomes, G.K.; Normey-Rico, J.E.; Kelber, C.R.; Becker, L.B. A predictive controller for autonomous vehicle path tracking. IEEE Trans. Intell. Transp. Syst. 2009, 10, 92–102. [Google Scholar] [CrossRef]
  97. Olgun, M.C.; Baytar, Z.; Akpolat, K.M.; Sahingoz, O.K. Autonomous vehicle control for lane and vehicle tracking by using deep learning via vision. In Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology (CEIT), Istanbul, Turkey, 25–27 October 2018; pp. 1–7. [Google Scholar]
  98. Azam, S.; Munir, F.; Rafique, A.; Ko, Y.M.; Sheri, A.M.; Jeon, M. Object modeling from 3D point cloud data for self-driving vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 409–414. [Google Scholar]
  99. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927. [Google Scholar]
  100. Ha, Q.; Watanabe, K.; Karasawa, T.; Ushiku, Y.; Harada, T. MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5108–5115. [Google Scholar]
  101. Chen, B.; Gong, C.; Yang, J. Importance-aware semantic segmentation for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2018, 20, 137–148. [Google Scholar] [CrossRef]
  102. Treml, M.; Arjona-Medina, J.; Unterthiner, T.; Durgesh, R.; Friedmann, F.; Schuberth, P.; Mayr, A.; Heusel, M.; Hofmarcher, M.; Widrich, M.; et al. Speeding up Semantic Segmentation for Autonomous Driving. Available online: https://openreview.net/pdf?id=S1uHiFyyg (accessed on 22 December 2024).
  103. Siam, M.; Elkerdawy, S.; Jagersand, M.; Yogamani, S. Deep semantic segmentation for automated driving: Taxonomy, roadmap and challenges. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–8. [Google Scholar]
  104. Koopman, P.; Wagner, M. Challenges in autonomous vehicle testing and validation. Sae Int. J. Transp. Saf. 2016, 4, 15–24. [Google Scholar] [CrossRef]
  105. Hanif, A.N.B.; Al-Humairi, S.N.S.; Daud, R.J. IoT-based: Design an autonomous bus with QR code communication system. In Proceedings of the 2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS), Shah Alam, Malaysia, 26 June 2021; pp. 225–230. [Google Scholar]
  106. Ozan, E. QR Code Based Signage to Support Automated Driving Systems on Rural Area Roads. In Industrial Engineering and Operations Management II: XXIV IJCIEOM, Lisbon, Portugal, 18–20 July 2024; Springer: Berlin/Heidelberg, Germany, 2019; pp. 109–116. [Google Scholar]
  107. Rajesh, K.; Waranalatha, S.S.; Reddy, K.V.M.; Supraja, M. QR code-based real-time vehicle tracking in indoor parking structures. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; pp. 11–16. [Google Scholar]
  108. Ang, J.L.F.; Lee, W.K.; Ooi, B.Y.; Ooi, T.W.M. Location Sensing using QR codes via 2D camera for Automated Guided Vehicles. In Proceedings of the 2020 IEEE Sensors Applications Symposium (SAS), Kuala Lumpur, Malaysia, 9–11 March 2020; pp. 1–6. [Google Scholar]
  109. Abid, S.; Hayat, B.; Shafique, S.; Ali, Z.; Ahmed, B.; Riaz, F.; Sung, T.-E.; Kim, K.-I. A Robust QR and Computer Vision-Based Sensorless Steering Angle Control, Localization, and Motion Planning of Self-Driving Vehicles. IEEE Access 2021, 9, 151766–151774. [Google Scholar] [CrossRef]
  110. Niu, G.; Yang, Q.; Gao, Y.; Pun, M.-O. Vision-based autonomous landing for unmanned aerial and ground vehicles cooperative systems. IEEE Robot. Autom. Lett. 2021, 7, 6234–6241. [Google Scholar] [CrossRef]
  111. Santos, P.; Santos, M.; Trslic, P.; Omerdic, E.; Toal, D.; Dooly, G. Autonomous tracking system of a moving target for underwater operations of work-class ROVs. In Proceedings of the OCEANS 2021: San Diego–Porto, San Diego, CA, USA, 20–23 September 2021; pp. 1–6. [Google Scholar]
  112. Lebedev, I.; Erashov, A.; Shabanova, A. Accurate autonomous UAV landing using vision-based detection of ArUco marker. In International Conference on Interactive Collaborative Robotics; Springer: Berlin/Heidelberg, Germany, 2020; pp. 179–188. [Google Scholar]
  113. Blachut, K.; Danilowicz, M.; Szolc, H.; Wasala, M.; Kryjak, T.; Komorkiewicz, M. Automotive perception system evaluation with reference data from a UAV’s camera using ArUco markers and DCNN. J. Signal Process. Syst. 2022, 94, 675–692. [Google Scholar] [CrossRef]
  114. Volden, Ø.; Stahl, A.; Fossen, T.I. Vision-based positioning system for auto-docking of unmanned surface vehicles (USVs). Int. J. Intell. Robot. Appl. 2022, 6, 86–103. [Google Scholar] [CrossRef]
  115. Ibrahim, M.; Ullah, H.; Rasheed, A. Vision-based Autonomous Tracking Control of Unmanned Aerial Vehicle. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6. [Google Scholar]
  116. Morales, J.; Castelo, I.; Serra, R.; Lima, P.U.; Basiri, M. Vision-based autonomous following of a moving platform and landing for an unmanned aerial vehicle. Sensors 2023, 23, 829. [Google Scholar] [CrossRef] [PubMed]
  117. Irfan, M.; Dalai, S.; Kishore, K.; Singh, S.; Akbar, S.A. Vision-based guidance and navigation for autonomous MAV in indoor environment. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–5. [Google Scholar]
  118. Robotnik Automation. What Is the Difference Between AGVs vs AMRs? Available online: https://robotnik.eu/what-is-the-difference-between-agvs-vs-amr/ (accessed on 9 December 2024).
  119. Goga, A.-S.; Toth, Z.; Meclea, M.-A.; Puiu, I.-R.; Boșcoianu, M. The Proliferation of Artificial Intelligence in the Forklift Industry—An Analysis for the Case of Romania. Sustainability 2024, 16, 9306. [Google Scholar] [CrossRef]
  120. Martínez-Díaz, M.; Soriguera, F. Autonomous vehicles: Theoretical and practical challenges. Transp. Res. Procedia 2018, 33, 275–282. [Google Scholar] [CrossRef]
  121. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  122. Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An overview of autonomous vehicles sensors and their vulnerability to weather conditions. Sensors 2021, 21, 5397. [Google Scholar] [CrossRef] [PubMed]
  123. Al-Haija, Q.A.; Gharaibeh, M.; Odeh, A. Detection in adverse weather conditions for autonomous vehicles via deep learning. AI 2022, 3, 303–317. [Google Scholar] [CrossRef]
  124. Hou, G. Evaluating efficiency and safety of mixed traffic with connected and autonomous vehicles in adverse weather. Sustainability 2023, 15, 3138. [Google Scholar] [CrossRef]
  125. Balasubramaniam, A.; Pasricha, S. Object detection in autonomous vehicles: Status and open challenges. arXiv 2022, arXiv:2201.07706. [Google Scholar]
  126. Uçar, A.; Demir, Y.; Güzeliş, C. Object recognition and detection with deep learning for autonomous driving applications. Simulation 2017, 93, 759–769. [Google Scholar] [CrossRef]
  127. Ravindran, R.; Santora, M.J.; Jamali, M.M. Multi-object detection and tracking, based on DNN, for autonomous vehicles: A review. IEEE Sensors J. 2020, 21, 5668–5677. [Google Scholar] [CrossRef]
  128. Li, X.; Teng, S.; Liu, B.; Dai, X.; Na, X.; Wang, F.-Y. Advanced scenario generation for calibration and verification of autonomous vehicles. IEEE Trans. Intell. Veh. 2023, 8, 3211–3216. [Google Scholar] [CrossRef]
  129. Wagner, P.; Buisson, C.; Nippold, R. Challenges in applying calibration methods to stochastic traffic models. Transp. Res. Rec. 2016, 2560, 10–16. [Google Scholar] [CrossRef]
  130. Valentine, D.C.; Smit, I.; Kim, E. Designing for calibrated trust: Exploring the challenges in calibrating trust between users and autonomous vehicles. Proc. Des. Soc. 2021, 1, 1143–1152. [Google Scholar] [CrossRef]
  131. Philip, B.V.; Alpcan, T.; Jin, J.; Palaniswami, M. Distributed real-time IoT for autonomous vehicles. IEEE Trans. Ind. Inform. 2018, 15, 1131–1140. [Google Scholar] [CrossRef]
  132. Liu, S.; Liu, L.; Tang, J.; Yu, B.; Wang, Y.; Shi, W. Edge computing for autonomous driving: Opportunities and challenges. Proc. IEEE 2019, 107, 1697–1716. [Google Scholar] [CrossRef]
  133. Bounini, F.; Gingras, D.; Lapointe, V.; Pollart, H. Autonomous vehicle and real time road lanes detection and tracking. In Proceedings of the 2015 IEEE Vehicle Power and Propulsion Conference (VPPC), Montreal, QC, Canada, 19–22 October 2015; pp. 1–6. [Google Scholar]
  134. Parekh, D.; Poddar, N.; Rajpurkar, A.; Chahal, M.; Kumar, N.; Joshi, G.P.; Cho, W. A review on autonomous vehicles: Progress, methods and challenges. Electronics 2022, 11, 2162. [Google Scholar] [CrossRef]
  135. Rapp, J.; Tachella, J.; Altmann, Y.; McLaughlin, S.; Goyal, V.K. Advances in single-photon lidar for autonomous vehicles: Working principles, challenges, and recent advances. IEEE Signal Process. Mag. 2020, 37, 62–71. [Google Scholar] [CrossRef]
  136. Dirsehan, T.; Can, C. Examination of trust and sustainability concerns in autonomous vehicle adoption. Technol. Soc. 2020, 63, 101361. [Google Scholar] [CrossRef]
  137. Hawke, J.; Badrinarayanan, V.; Kendall, A. Reimagining an autonomous vehicle. arXiv 2021, arXiv:2108.05805. [Google Scholar]
  138. Tang, P.; Sun, X.; Cao, S. Investigating user activities and the corresponding requirements for information and functions in autonomous vehicles of the future. Int. J. Ind. Ergon. 2020, 80, 103044. [Google Scholar] [CrossRef]
  139. Litman, T. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning; Victoria Transport Policy Institute: Victoria, BC, Canada, 2020; Available online: https://www.vtpi.org/avip.pdf (accessed on 22 December 2024).
  140. Moody, J.; Bailey, N.; Zhao, J. Public perceptions of autonomous vehicle safety: An international comparison. Saf. Sci. 2020, 121, 634–650. [Google Scholar] [CrossRef]
  141. Hucko, F. The Development of Autonomous Vehicles. Master’s Thesis, Aalborg University Copenhagen, Copenhagen, Denmark, 2017. [Google Scholar]
  142. VisionNav Robotics. Automated Self-Driving Forklifts: Revolutionizing Warehousing and Logistics. VisionNav Robotics. 2023. Available online: https://www.visionnav.com/news/detail/automated-self-driving-forklifts (accessed on 17 December 2024).
  143. Zivid. Zivid: High-Precision 3D Vision for Robotics and Automation. 2023. Available online: https://www.zivid.com (accessed on 17 December 2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
