Article

An Augmented Reality-Assisted Disassembly Approach for End-of-Life Vehicle Power Batteries

College of Mechanical Engineering, Donghua University, Shanghai 201620, China
*
Author to whom correspondence should be addressed.
Machines 2023, 11(12), 1041; https://doi.org/10.3390/machines11121041
Submission received: 16 July 2023 / Revised: 28 October 2023 / Accepted: 8 November 2023 / Published: 22 November 2023
(This article belongs to the Section Machine Design and Theory)

Abstract

The rapid expansion of the global electric vehicle industry has presented significant challenges in the management of end-of-life power batteries. Retired power batteries contain valuable resources, such as lithium, cobalt, nickel, and other metals, which can be recycled and reused in various applications. Existing disassembly processes rely on manual operations that are time-consuming, labour-intensive, and prone to errors. This research proposes an intelligent augmented reality (AR)-assisted disassembly approach that aims to increase disassembly efficiency by providing scene awareness and visual guidance to operators in real time. The approach starts by employing a deep learning-based instance segmentation method to process the Red-Green-Blue-Depth (RGB-D) data of the disassembly scene. The method segments the disassembly object instances and reconstructs their point cloud representations from the depth information within the instance masks. In addition, to estimate the poses of the disassembly targets in the scene and assess their disassembly status, an iterative closest point (ICP) algorithm is used to align the segmented point cloud instances with the actual disassembly objects. The acquired information is then utilised for the generation of AR instructions, decreasing the need for frequent user interaction during the disassembly processes. To verify the feasibility of the AR-assisted disassembly system, experiments were conducted on end-of-life vehicle power batteries. The results demonstrated that this approach significantly enhanced disassembly efficiency and decreased the frequency of disassembly errors. Consequently, the findings indicate that the proposed approach is effective and holds promise for large-scale industrial recycling and disassembly operations.

1. Introduction

The drive to reduce emissions and minimise resource consumption has propelled car manufacturers to make significant advancements in the electric vehicle (EV) market [1]. With the rapid development of the electric vehicle industry in recent years, a large-scale wave of end-of-life power batteries is approaching. It is estimated that nearly 10% of global vehicle sales in 2021 were electric, four times their market share in 2019 [2]. The management of end-of-life power batteries is a complex task due to the presence of various substances that require different treatment approaches. Valuable materials, such as metallic materials, need to be recovered, while hazardous materials, including heavy metals and organic electrolytes, require appropriate treatment. Inadequate recycling and management practices can result in resource waste and severe environmental pollution. Moreover, power batteries have diverse sources and come in a wide range of types; they are manufactured through different processes and subjected to varied service conditions. Once retired, their disassembly involves non-linear parameter drift that deviates from the original assembly process, so it cannot be regarded as a simple reversal of assembly or manufacturing. Achieving large-scale and automated disassembly is therefore a challenging task. This research proposes an intelligent disassembly approach that can adapt to various scenarios, enabling the standardised and large-scale management of retired electric vehicle power batteries. The approach aims to enhance the flexibility, reliability, and efficiency of disassembling and recycling power lithium batteries.
In general, disassembly is seen as the inverse process of assembly. It holds the greatest potential for achieving intelligent and adaptive operation for various types of target products. Disassembly can be carried out at a single workstation, in a unit, or on a production line [3]. Disassembly workstations are flexible and suited to small-batch scenarios, whereas units and production lines are mainly designed for large-scale operations on identical targets. In addition, there is a substantial shift towards the development of fully autonomous robot cells for disassembly and the enhancement of human–robot collaboration conditions and processes in the disassembly domain. A growing emphasis on the algorithmic optimisation of sequence planning, including the application of AI strategies, promises to bolster the feasibility of automated robotic disassembly [4]. For example, a matheuristics approach was proposed for automating bin packing, which in turn facilitated cost-effective logistics [5]. The disassembly process for retired vehicle power batteries needs to be adapted to the characteristics of multiple brands and types and to their diverse service statuses, which imposes significant challenges on the end-of-life management industry. Moreover, when facing different batches of vehicle power batteries, there is insufficient information regarding their manufacturing details or historical disassembly experience. The diverse assembly and manufacturing approaches of power batteries make it difficult to design and develop an intelligent and automated disassembly line efficiently. Hence, a significant challenge lies in the limitations of current industrial disassembly systems, which may encompass semi-automated disassembly tools/robots and computer vision systems, particularly their limited flexibility and adaptability when dealing with diverse power battery products. In response, the integration of mobile and wearable Augmented Reality (AR) systems emerges as a promising solution to enhance the intelligence and automation of disassembly processes within industrial settings.
Nowadays, the Industry 4.0 concept is revolutionising the industrial landscape by digitising production processes, introducing automation, and integrating production sites into extensive supply chains. To achieve this, cutting-edge technologies like the Internet of Things (IoT), Big Data, and Augmented Reality (AR) are combined with established principles and techniques from traditional industrial production [6]. AR is a visualisation technology that overlays virtual objects onto the physical environment, playing a crucial role in Industry 4.0 smart manufacturing [7]. The mobility and intuitive visualisation features of AR-assisted systems offer the potential for an intelligent disassembly system for retired vehicle power batteries. AR techniques have been applied in assembly simulation and planning [8], maintenance [9], manufacturing, and quality assessment [10]. However, manual calibration and comparison may require significant user attention and effort, and traditional image segmentation algorithms may struggle in complex scenarios, specifically in the assembly process [11]. For example, an AR technique was applied to the remote planning and control of drones in indoor environments, enabling engineers to design and transmit sequences of actions wirelessly to drones, improving efficiency and reducing the need for human intervention [12]. In addition, an Augmented Repair Training Application (ARTA) was designed and developed to simplify AR content creation for end-users, particularly those on the shop floor. The approach was then assessed through an industrial case study conducted in partnership with a small enterprise specialising in the Used and Waste Electronic and Electrical Equipment (UEEE/WEEE) sector [13].
This research aims to address the limitations of existing disassembly methods for retired vehicle power batteries, such as weak scene perception and a lack of disassembly assistance. To overcome these challenges, this paper integrates AR technology and a deep learning algorithm to assist operators in disassembly tasks. The proposed method leverages an efficient scene recognition algorithm to understand the context of the current disassembly state. The AR-assisted disassembly instructions are then generated automatically, reducing the disassembly time and enhancing the flexibility of the disassembly system. The main contributions of this study are summarised as follows:
(1)
The design of an instance segmentation-based AR approach for disassembly scenes, which improves the scene perception capability of the AR-assisted disassembly system by identifying and segmenting each stage of the disassembly task.
(2)
The analysis of the AR-assisted disassembly approach from the perspective of scene awareness and AR-aided guidance. The proposed approach enables the automatic updating of disassembly instructions, improving disassembly efficiency and reducing the operational burden on workers.
(3)
The feasibility of the proposed method is validated through a prototype system applied to case-study products in an industrial setting. Experiments are carried out in various practical scenarios.
The rest of this article is organised as follows: Section 2 presents a review of related research. The proposed AR-assisted disassembly approach is described in Section 3, where the proposed instance segmentation and deep learning model are presented in detail. Section 4 compares the proposed approach with the traditional disassembly process to evaluate the performance with comparative results. Section 5 concludes this article and discusses future work.

2. Related Work

This section summarises the fundamentals of the AR-assisted disassembly approach and provides a review of the recent developments regarding the implementation of the AR technique and deep learning approach in industrial applications.

2.1. AR and Its Applications in Manufacturing

Augmented Reality (AR) technologies are being increasingly utilised to facilitate various aspects of production, such as visual guidance, interactive collaboration, and information management. These technologies leverage digital information to enhance the assembly process, inspection procedures, quality evaluation, and other manufacturing tasks.
3D object registration approaches were used to design a guidance system that renders assembly planning, inspection status, and quality evaluation on a model of cabin parts within an AR environment. By overlaying digital information onto physical objects, operators can receive visual guidance and real-time feedback, improving their efficiency and accuracy during the assembly process [14]. In addition, an AR guidance system was developed to enhance the interaction between human cognition and manual operation. By considering the users’ actions and providing relevant information, the system aims to improve the operators’ understanding and performance during tasks [15]. Cardoso et al. collaborated with Embraer to develop a mobile AR application for fuselage assembly. While the application provides intuitive assembly guidance, it requires manual calibration, which can be time-consuming and demands user concentration [16]. Fang et al. proposed an AR-assisted collaborative assembly positioning technique based on distributed cognition. The technique allows multiple operators to work together by projecting advanced assembly instructions onto objects, facilitating information exchange and improving collaborative assembly [17]. Zhu et al. integrated digital twin data with physical objects to optimise the manufacturing process. By projecting digital twin data onto the object’s surface in real time, operators can better understand diverse data and indicators, leading to improved manufacturing outcomes [18]. Airbus and Testia developed an AR-based assembly inspection tool called Smart Mixed Reality (SMR). The SMR tool utilises marker-based AR registration to superimpose virtual brackets on physical brackets. However, it still relies on manual comparison and labelling by the inspector [19]. Li et al. integrated AR with deep learning to address the problem of pin mismatch inspection in complex aviation connectors. By combining the flexibility and mobility of AR with deep learning algorithms, they developed an automatic inspection service that improves inspection efficiency [20]. Abdallah et al. proposed an AR-based automatic assembly inspection system to address the limitations of manual comparison. However, their system utilised a traditional image segmentation algorithm that may struggle with complex and cluttered assembly scenarios, such as variations in lighting conditions, backgrounds, and product appearances [21]. Jia introduced a collision detection system using AR technology to simulate real glass collision. This system helps lower the experimental costs and risks associated with collision testing by providing a virtual simulation environment [22]. These diverse applications of AR technology in manufacturing showcase its potential to enhance collaboration, improve operational efficiency, optimise processes, and address specific challenges in various domains.

2.2. Deep Learning Approach for Recycling and Disassembly

The disassembly process necessitates a significant amount of product information and extensive practical experience in extracting product features to identify the target and determine the appropriate disassembly process. Methods related to disassembly learning focus on establishing mathematical models, typically based on experience or specific products. They apply knowledge from decommissioned products to disassembly planning, aiming to achieve simulation optimisation and to enhance its performance. However, differences in end-of-life product types and service conditions impact disassembly activities, and the dynamic uncertainty of the actual retired status affects the execution of the disassembly. Currently, research on the adaptive design and response process of disassembly process parameters is still insufficient, and intelligent optimisation of the process design model in dynamic environments remains a challenge [23].
In the domain of intelligent manufacturing, growing demand has led to the utilisation of intelligent algorithms in the manufacturing process, empowering production processes to achieve automation and intelligence [24]. Deep learning algorithms achieve self-reinforcement through continuous interaction with the environment, obtaining feedback and establishing a self-learning and self-evolution mechanism within the manufacturing process, enabling applications in industrial settings [25,26]. In [27], a variable admittance human–machine interaction model based on fuzzy reinforcement learning was proposed. By incorporating human operation characteristics into the human–computer interaction process through online learning, the robot can adaptively adjust and respond to the human operator’s control intent. Furthermore, deep reinforcement learning was applied to obtain the control strategy of the robot, guiding the control process of human–robot collaboration and facilitating the perception of human body control strategies [28]. In addition, a generation-then-evaluation rule guided by reinforcement learning was proposed, effectively addressing the problem of rule mining in large-scale knowledge bases [29]. Based on integrated reinforcement learning, Zhao et al. developed a multi-level selective disassembly hybrid graph model, utilising the Markov decision method to generate optimal disassembly sequences under uncertainties of product structure and the usage stage of decommissioned products [30]. A maximum entropy reinforcement learning framework was developed to learn assembly strategies, using shaft-hole assembly as an example. This approach enables the transfer of skills between tasks, allowing its application in real high-precision assembly scenarios while minimising the need for real-world interaction [31,32,33]. Reinforcement learning can also be applied to rule mining and knowledge discovery in knowledge graphs; by utilising reinforcement learning agents, rule generation and agent operation guidance can be achieved simultaneously [34]. Reinforcement learning methods have likewise been used to analyse the actions of human operators in the assembly environment and predict their trajectories, effectively supporting the action planning and execution of online robots [35].
In the above-mentioned literature, either AR or deep learning-based approaches were implemented to facilitate the analytic and inferential capability of industrial inspections, especially for the assembly of cabin parts, fuselages, brackets, and so on. While these technologies offer significant benefits in terms of improving efficiency, accuracy, and training in industrial processes, it is important to consider both their advantages and limitations. By overlaying digital information onto physical objects, workers can gain insights, instructions, and visual aids that assist them in performing their tasks accurately and efficiently. For example, for tasks that require precision and attention to detail, AR can provide step-by-step guidance, highlight the correct alignment, and offer information about the assembly process. However, the reliance on manual input, calibration, and potentially labour-intensive setup poses challenges in terms of implementation and usability. Few existing studies have integrated AR and deep learning techniques in disassembly and recycling processes, compared with their application in assembly and other production scenarios. The proposed research therefore focuses on this less explored area. Disassembly and recycling processes can be complex due to variable conditions, a lack of standardisation, and the intricacy of retired products; disassembly poses unique challenges because complex scenes must be perceived accurately and disassembly assistance is often lacking, which can lead to longer disassembly times and reduced flexibility. Therefore, this research investigates an AR-assisted disassembly approach to tackle the challenges of weak scene perception and lacking disassembly assistance. This integrated approach offers a flexible and adaptable solution for the disassembly of end-of-life vehicle power batteries, with benefits such as reduced disassembly time, enhanced flexibility of the disassembly process, and a potential improvement in the overall efficiency of recycling operations.

3. Methodology

This section provides a detailed introduction to the proposed AR-assisted disassembly approach. It emphasises the importance of accurate and efficient instance segmentation for intelligent disassembly processes. To achieve this, the approach integrates AR with a deep learning model to enable real-time parts segmentation on retired vehicle power batteries.

3.1. Overview of the Proposed Method

The current scenario in the end-of-life management of vehicle power batteries is characterised by the impending need for large-scale and automated processes. However, existing methods for recycling decommissioned power batteries primarily rely on manual or semi-automated disassembly techniques. This approach presents several challenges, including inconsistent disassembly quality, limited automation, and suboptimal safety measures. As a result, there is a pressing requirement to develop a disassembly method that is not only efficient and safe but also intelligent, catering to the industrial demands of power battery recycling. The assumptions made in this research include:
(1)
Manual and semi-automatic limitations: inconsistencies in disassembly quality are largely caused by human involvement, and manual or semi-automated disassembly has limitations in terms of scalability, throughput, and safety.
(2)
Imminent need for automation: the increasing volume of retired power batteries necessitates a swift transition to automated processes. The demand for efficient and effective approaches is driven by environmental concerns and economic considerations.
(3)
Variability in power battery types: end-of-life power batteries come in a wide range of sizes and configurations due to their diverse applications. The variability in power battery types adds complexity to the disassembly process, making a standardised approach challenging.
(4)
Flexibility requirement: the practical disassembly method should be adaptable to the wide range of retired power batteries, regarding their types and conditions. Overlaying AR visualisations onto the disassembly workspace can provide step-by-step instructions to the operator, significantly enhancing the efficiency of the disassembly process.
This research leverages the combination of AR and deep learning techniques to enhance the efficiency of disassembly processes. First, an instance segmentation algorithm is applied to the images acquired by the AR device, yielding mask information for the parts to be disassembled. The mask information is then used to extract the depth information of the target parts, which is reconstructed into point cloud instances with a spatial geometric structure. The spatial poses and positions of the disassembly targets are estimated using the Iterative Closest Point (ICP) algorithm. Subsequently, the disassembly steps are retrieved based on a predetermined disassembly sequence diagram, and the corresponding AR guidance is displayed to assist the worker in completing the disassembly. The proposed approach is divided into four steps, as illustrated in Figure 1.
In Step 1, the data are collected from the disassembly scene using the wearable AR device. These data are then transmitted to the server via the Transmission Control Protocol/Internet Protocol (TCP/IP), where they are processed to reduce the computational burden on the AR device. In Step 2, using the instance segmentation algorithm, i.e., Mask R-CNN, the server employs its abundant computing resources to locate and identify the disassembly parts within the acquired images. In Step 3, utilising the instance segmentation results, the depth information of the disassembly parts is segmented from the depth map. This depth information is then reconstructed in 3D, resulting in the instance point cloud of the disassembly object. By performing ICP registration between the instance point cloud and the template point cloud, the spatial orientation of the disassembly parts can be determined. In addition, the 3D model of the disassembly product includes information on the parts’ relationships, which is the input for the generation of disassembly sequences. In Step 4, given the predetermined disassembly sequence diagram, the network evaluates the current disassembly progress. Furthermore, it renders the virtual parts to be disassembled in the subsequent steps at the appropriate positions and provides suitable disassembly strategies. The AR-assisted disassembly instructions required for ongoing disassembly tasks are pre-generated; once the AR device receives the information from the server, the specific disassembly instruction is ready, avoiding unnecessary intervention by the human operator. In this way, the AR-assisted disassembly instruction is attached to the practical industrial scene using Algorithm 1.
Algorithm 1 Scene perception based on instance segmentation.
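Algorithm 1 appears as an image in the original publication. The following Python sketch reconstructs the perception loop it describes from the surrounding text; every callable is a hypothetical stand-in for the components detailed in Sections 3.2 and 3.3, not code from the paper.

```python
import time
from typing import Callable

def scene_perception_loop(
    capture_rgbd: Callable,      # AR client: returns an (rgb, depth) frame pair
    segment: Callable,           # Mask R-CNN wrapper: rgb -> list of (label, mask)
    to_cloud: Callable,          # masked depth -> instance point cloud (Section 3.2.1)
    estimate_pose: Callable,     # ICP against the part's template cloud (Section 3.2.2)
    next_instruction: Callable,  # lookup in the disassembly sequence diagram (Section 3.3)
    render: Callable,            # AR client: overlay the chosen instruction
    period_s: float = 10.0,      # perception interval used in Section 4.2.1
) -> None:
    """Server-side scene-perception loop; all callables are assumed interfaces."""
    while True:
        rgb, depth = capture_rgbd()                    # Step 1: acquire RGB-D data
        instances = segment(rgb)                       # Step 2: instance masks + labels
        poses = {label: estimate_pose(to_cloud(depth, mask), label)
                 for label, mask in instances}         # Step 3: point clouds -> ICP poses
        render(next_instruction(instances, poses))     # Step 4: auto-triggered AR guidance
        time.sleep(period_s)                           # no user interaction required
```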

3.2. Instance Segmentation and Pose Estimation of Target Disassembly Parts

Traditional AR-aided methods rely on pre-defined programs that require user interaction to initiate visual instructions. In contrast, the proposed method, based on instance segmentation, perceives the current disassembly status of the scene and automatically triggers AR instructions relevant to the ongoing disassembly task. It therefore enhances the efficiency and effectiveness of the AR-assisted disassembly process while minimising distractions and interruptions for the workers.

3.2.1. Instance Segmentation of Disassembly Parts

Various methods for implementing scene perception were analysed. Mask R-CNN has been widely used, with excellent target detection accuracy and computational efficiency [36,37]. In addition, compared with YOLO and Fast R-CNN, Mask R-CNN combines high accuracy with good segmentation performance on complicated structures [38,39]. Thus, Mask R-CNN is chosen as the basic framework for scene recognition in disassembly processes, as shown in Figure 2. The images are scanned using a Region Proposal Network (RPN) to generate candidate regions. Each candidate region is classified and localised to obtain the bounding box of the object. Finally, feature extraction is performed on the pixels in each bounding box using a CNN, and a mask is generated from the feature map.
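For illustration, such an instance segmentation pass can be sketched with torchvision’s off-the-shelf Mask R-CNN. This is a stand-in under stated assumptions: the stock torchvision model uses a ResNet-50 FPN backbone with COCO weights, whereas this paper fine-tunes a ResNet-101 variant on its own power-battery dataset; the file name and confidence threshold below are likewise illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf Mask R-CNN (ResNet-50 FPN, COCO weights); the paper instead
# fine-tunes a ResNet-101 backbone on a retired power-battery dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("disassembly_scene.jpg").convert("RGB"))
with torch.no_grad():
    pred = model([image])[0]  # dict with boxes, labels, scores, masks

keep = pred["scores"] > 0.5              # assumed confidence threshold
masks = pred["masks"][keep, 0] > 0.5     # one binary mask per detected instance
boxes, labels = pred["boxes"][keep], pred["labels"][keep]
```

The binary masks produced here are what the next step uses to cut the disassembly objects’ depth pixels out of the registered depth map.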
This research employs a pre-trained Mask R-CNN model for end-to-end learning on a dataset of a retired vehicle power battery, with ResNet-101 as the pre-trained backbone to enhance learning performance. The mask is then detected within the bounding box region of the detection results. The parameters for deep learning-based instance segmentation, the development environment, and the results are presented in Figure 3. RGB-D data are used in this research, comprising conventional three-channel RGB images and a single-channel depth image. The RGB and depth maps are registered, ensuring pixel correspondence between them. The instance segmentation results can thus be utilised to segment the depth information of the disassembly objects in the scene while eliminating irrelevant data. Following 3D reconstruction, an instance of the disassembly object in the form of a point cloud is obtained. Compared with matching against the full scene point cloud, ICP matching on these segmented instances provides higher accuracy and efficiency. The depth information obtained for the disassembly objects can be converted into point cloud data with a geometric structure by utilising the mapping relationship between the camera and the physical space, shown in Figure 4a. Two motions, Rc and tc (i.e., rotation and translation), describe the transformation between the coordinate systems of the camera and the physical space. The point P (X,Y,Z) in the physical space and the point p (u,v) in the image coordinate system can be converted into each other, so each pixel in the depth image can be converted into the three-dimensional coordinates of a point in the physical space. Hence, the depth data corresponding to an individual disassembly part can be transformed into point cloud data specific to that part. Figure 4b demonstrates the reconstruction of the disassembly tool’s point cloud using the depth and mask information associated with that part.
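The pixel-to-point mapping in Figure 4a follows the standard pinhole model: a pixel (u, v) with depth Z back-projects to X = (u − cx)·Z/fx and Y = (v − cy)·Z/fy, where (fx, fy, cx, cy) are the camera intrinsics. Below is a minimal NumPy sketch of this conversion, assuming a metric depth map registered to the RGB image and a boolean instance mask; the intrinsic values in the usage comment are illustrative, not the HoloLens calibration.

```python
import numpy as np

def mask_depth_to_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the masked depth pixels of one instance into an
    (N, 3) point cloud in the camera frame (pinhole model)."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel coordinates inside the mask
    z = depth[v, u]                         # metric depth at those pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

# Illustrative call with made-up intrinsics:
# cloud = mask_depth_to_cloud(depth, inst_mask, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
```

Applying the extrinsic motions Rc and tc then moves these camera-frame points into the physical-space coordinate system.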

3.2.2. Attitude Estimation of Disassembly Parts

To achieve scene perception, it is important to match the virtual 3D model with the actual object using the ICP algorithm. The target for this matching process is the 3D point cloud of the disassembly object. The point cloud of the 3D virtual disassembly object is designated as the source, and ICP registration is conducted to acquire the pose information of the object in the physical space. This information comprises a 3 × 3 rotation matrix R, indicating the rotation required for the model’s point cloud to align with the pose of the real object, and a 3 × 1 translation vector t, indicating the translation needed for the virtual model’s point cloud to align with the position of the real object.
The ICP matching algorithm identifies the corresponding points between the target point cloud Q and the source point cloud P. It then applies the appropriate transformations, based on these corresponding points, to calculate the rotation matrix R and translation vector t. This calculation is achieved by minimising the geometric differences between the source and target point clouds. The point cloud obtained through the depth camera is P = {p1, p2, p3, …, pn}, and the point cloud of the 3D virtual disassembly object is Q = {q1, q2, q3, …, qn}; P is expressed in the coordinate system of the camera, while Q is expressed in the coordinate system of the space where the 3D virtual disassembly object is situated. After each transformation of the source point cloud, corresponding points are re-identified in the transformed cloud. The error between the source and target point clouds under the transformation is represented by the equations in Figure 5. The solution identifies the nearest-neighbour pairs (pi, qi) and then calculates the optimal matching parameters R* and t* that minimise the error function. If the error falls below a threshold value set to ensure alignment accuracy, the corresponding transformation matrices R and t are output; otherwise, the optimisation of the transformation matrix continues iteratively.
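The equations in Figure 5 correspond to the standard point-to-point ICP objective: over the matched pairs (pi, qi), the algorithm minimises the mean squared alignment error,

$$ E(R, t) = \frac{1}{n}\sum_{i=1}^{n} \left\| q_i - (R\,p_i + t) \right\|^{2}, \qquad (R^{*}, t^{*}) = \underset{R,\,t}{\arg\min}\ E(R, t), $$

iterating between correspondence search and minimisation until E falls below the threshold. As a concrete sketch, this registration step could be implemented with Open3D (an assumed library choice, not the paper’s stated implementation); following Section 3.2.2, the virtual model cloud is taken as the source so that R and t place the model onto the real object.

```python
import numpy as np
import open3d as o3d

def estimate_pose_icp(instance_points, template_points, threshold=0.02):
    """Align the template (source) cloud to the segmented instance (target)
    cloud; returns the 3x3 rotation R and 3x1 translation t."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(np.asarray(template_points))
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(np.asarray(instance_points))

    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    T = result.transformation      # 4x4 homogeneous transform
    return T[:3, :3], T[:3, 3:]    # R (3x3), t (3x1)
```

The `threshold` argument is Open3D’s maximum correspondence distance and stands in for the alignment-accuracy threshold described above; a coarse initial transform can replace `np.eye(4)` when the clouds start far apart.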

3.3. Disassembly Instructions

3.3.1. Scene Awareness in the Disassembly Process

The information obtained in the previous section (i.e., R and t) can be used to derive the attitude information of the disassembly object in the scene, including the roll, pitch, and yaw angles, as well as the coordinate information. To improve the performance of the instance segmentation network in the AR-assisted disassembly environment, this section utilises specialised datasets closely related to the disassembly process for training and validation. The dataset, comprising 1000 samples, is categorised based on the disassembly process of a retired vehicle power battery. During the training phase, the learning parameters and coefficients were iteratively adjusted to enhance the accuracy of the model.
The dataset is split into a training set, a validation set, and a test set, with a distribution ratio of 7:1:2 in terms of the number of images. The training set is utilised to train the model and determine its parameters. The validation set is used to determine the network structure and adjust the model’s hyperparameters. The test set is employed to evaluate the model’s generalisation ability. To reflect the practical disassembly scene, images of the disassembly process taken from a first-person perspective were annotated, as illustrated in Figure 6.
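A minimal sketch of such a 7:1:2 split is shown below; the file names and random seed are placeholders, as the paper does not specify its splitting procedure.

```python
import random

paths = [f"battery_{i:04d}.png" for i in range(1000)]  # stand-in for the annotated samples
random.seed(42)                                        # fixed seed for a reproducible split
random.shuffle(paths)

n = len(paths)
train = paths[: int(0.7 * n)]               # 70%: fit model parameters
val = paths[int(0.7 * n): int(0.8 * n)]     # 10%: tune structure and hyperparameters
test = paths[int(0.8 * n):]                 # 20%: assess generalisation
```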

3.3.2. AR-Assisted Disassembly Processes

Scene perception technology can identify the components, parts, and corresponding tools that need to be disassembled. However, presenting all identification results to the user through the augmented reality (AR) glasses, as depicted on the left side of Figure 7, can result in information overload, hindering operators from grasping the key points amidst the abundance of chaotic information. Moreover, a substantial quantity of instance segmentation results can obscure real objects, thereby impeding personnel operations. Hence, segmentation processing is essential in disassembly scenarios to provide appropriate disassembly guidance, display scene perception results, and coordinate subsequent operations. In this research, an approach for judging disassembly operations and recommending disassembly guidance is proposed, leveraging tool attention. As depicted in Figure 8, the real-time disassembly scene is initially captured through scene perception. The subsequent operations of the operator are then determined from the disassembly sequence, considering the tools utilised and the progress of module disassembly. Afterwards, the scene undergoes segmentation, and the objects slated for dismantling are visualised to the operator through AR technology, serving as a guide during the disassembly process. For instance, when the operator grasps a cross-head screwdriver, scene recognition identifies that the operator is holding the screwdriver; based on previous disassembly experience, this indicates that the operator intends to unscrew screws. Consequently, employing the tool attention mechanism, the AR glasses exclusively highlight and display all the screws that need to be unscrewed.
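A minimal sketch of this tool-attention filtering follows; the tool-to-part mapping is illustrative rather than taken from the paper, where such associations would come from recorded disassembly experience.

```python
# Illustrative mapping from the tool in the operator's hand to the part
# classes worth highlighting in the AR overlay.
TOOL_ATTENTION = {
    "cross_head_screwdriver": {"screw"},
    "socket_wrench": {"bolt", "nut"},
    "pliers": {"cable", "connector"},
}

def filter_instances(instances, held_tool):
    """Keep only the segmented instances relevant to the tool in hand, so
    that grasping a screwdriver highlights screws and hides everything else."""
    relevant = TOOL_ATTENTION.get(held_tool, set())
    return [inst for inst in instances if inst["label"] in relevant]

# e.g. filter_instances(detections, "cross_head_screwdriver") -> screw masks only
```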

4. Experimental Results and Discussion

This research designs an AR-assisted disassembly system that employs a deep learning model. The system integrates process information from the disassembly process for the identification, location, and estimation of the disassembly parts and tools, thereby facilitating the analysis of the disassembly environment. Additionally, the system offers an AR auxiliary scene that dynamically presents the information from the analysis of the disassembly environment. The subsequent section outlines the system architecture and presents a case study to evaluate its performance.

4.1. Disassembly System Prototype

The developed AR-assisted disassembly system combines wearable AR technology, server-side processing, and custom data transmission methods to provide real-time disassembly instructions and assistance. By leveraging instance segmentation, depth data, and 3D models, the system offers valuable guidance to on-site operators, enhancing disassembly stability and efficiency. It consists of a design section, for the generation of AR-assisted disassembly instructions, and an application section, for their implementation (as shown in Figure 9). Within the design section, Unity3D, a development platform, is used to create the visual elements and spatial layout that are overlaid onto the physical view. Unity3D is employed to design the content and spatial position of the AR commands, taking into consideration the process information of the product to be disassembled, as well as the user’s posture and position in physical space. Emphasis is placed on user-focused design, to prevent any obstruction of the user’s line of sight and to keep the primary scene intact in the augmented view. Additionally, the control script of the system is developed using Visual Studio, covering the loading of disassembly animations, the setup of communication interfaces, and data processing; it acts as the bridge between the Unity3D-designed visuals and the system’s functionality. The application is built for the HoloLens, a mixed reality device, and is published to the HoloLens client, allowing users to interact with the augmented content in their physical environment. In the application section, implementing the deep learning approach used in this study directly on mobile or wearable AR devices is challenging, due to limitations in hardware and software. To address this challenge and enhance system responsiveness while alleviating the computing burden on the client, the system implements a client–cloud service model.
The wearable AR device serves as the client in the system and captures RGB-D data using its sensors. The captured data are transmitted to the server using the TCP network protocol for further processing and analysis. The server is responsible for processing the data received from the wearable AR device. Instance segmentation is performed on the captured data, identifying parts and tools within the AR image. The depth data of physical objects are segmented based on the instance segmentation results and then transformed into corresponding point clouds using the intrinsic matrix of the depth camera. This transformation ensures that the point clouds align correctly in 3D space. During data processing, the generated point cloud undergoes denoising to improve data quality and accuracy. The denoised point cloud is matched with a 3D model to extract attitude information about the parts, including their orientation and position. The attitude information and other relevant data are then transmitted back to the AR client as a JSON file. JSON is a versatile data format that enables the processing of various types of structured information, and custom C# classes were developed to facilitate efficient information transfer between the AR device and the server, enabling the storage and processing of various data types within JSON files. Lastly, the AR client uses the received attitude information to provide disassembly instructions and assistance: text prompts or virtual guide arrows are placed in the AR view at positions corresponding to the parts’ poses. On-site operators use the AR system for disassembly, enhancing both quality stability and efficiency.
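As an illustration of the server-to-client leg, a part’s pose could be serialised to JSON and framed over the TCP socket roughly as follows. The field names, port, and length-prefix framing are assumptions for this sketch; the paper’s actual schema is defined by its custom C# classes on the HoloLens side.

```python
import json
import socket

def send_pose_result(conn: socket.socket, label: str, R, t) -> None:
    """Serialise one part's pose as JSON and push it to the AR client,
    prefixing the payload with a 4-byte length for simple framing."""
    payload = json.dumps({
        "part": label,
        "rotation": [[float(x) for x in row] for row in R],  # 3x3 matrix
        "translation": [float(x) for x in t],                # 3-element vector
    }).encode("utf-8")
    conn.sendall(len(payload).to_bytes(4, "big") + payload)

# Server-side usage sketch: accept the HoloLens connection, then stream results.
# srv = socket.create_server(("0.0.0.0", 9000))
# conn, _ = srv.accept()
# send_pose_result(conn, "module_screw_3", R, t)
```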

4.2. Case Validation

This study focuses on analysing the disassembly process of a specific type of end-of-life vehicle power battery as an illustrative example. The primary objective is to verify the disassembly scene perception function of the AR auxiliary system and experimentally evaluate its impact on disassembly efficiency.

4.2.1. Disassembly Scene Perception

The core function of the AR-aided disassembly system is disassembly scene perception, which facilitates various system assistance tasks, such as identifying the categories of the currently disassembled parts and tools, estimating the pose of parts, and performing other related tasks. Mask R-CNN is a robust algorithm for performing instance segmentation; its results provide the type information and mask for the parts present in the current scene, which can then be utilised by the disassembly scene perception module within the system. The ICP algorithm is employed to align point clouds captured from different perspectives and derive the transformation matrix necessary for converting the source point cloud into the target point cloud; by utilising it, the pose information of parts within the current disassembly environment can be obtained. To assess the real-time responsiveness of the system, the processing time of the scene awareness module was measured as it analysed the RGB-D data captured by the client. The computational requirements of the Mask R-CNN and ICP algorithms are substantial, making continuous real-time monitoring of the disassembly environment impractical. Consequently, the algorithm is activated every 10 s to perceive the disassembly environment.
The scene awareness module triggers pertinent disassembly-related information, given the recognition results obtained from the current disassembly scene. The user wears HoloLens glasses for vehicle power battery disassembly, with the scene perception module processing the data captured by the HoloLens device. The recognition results are displayed as virtual indicator boxes within the augmented reality space, and the relevant disassembly process information and working principles are visually presented as text. To improve the user experience, users have the option to activate AR-assisted functions at any point during use. Various working modes (Mode 1, Mode 2, and Mode 3, as shown in Figure 10) further enhance the user experience; these modes offer voice control capabilities and feature distinct user interfaces. For instance, Mode 1 employs a text-based interface to display specific disassembly instructions, including information about the required tools and task details. Mode 2 adopts an image-based interface, replacing text with visual representations of disassembly objects and tools. Mode 3 provides a simplified version that emphasises the disassembly tasks and associated tools.

4.2.2. AR Disassembly System Evaluation

The effectiveness of the AR-assisted disassembly was evaluated by documenting the task completion status of a vehicle power battery among workers with diverse proficiency levels. The assessment was conducted using four groups (Groups A to D), each comprising ten workers. Derived from an investigation conducted at disassembly factories, the primary evaluation criteria for disassembly operators encompassed three key facets: related working experience, disassembly efficiency, and disassembly quality. The usability study is shown in Appendix A. The categorisation of the groups was based on these criteria. Junior disassembly workers were defined as individuals with less than one year of experience in disassembly-related work; this classification also included those whose disassembly efficiency, as measured by metrics like monthly or quarterly disassembly volume/output, fell within the lowest 20% of their peers, or whose disassembly quality (i.e., sorting purity of targets) ranked in the lowest 20% within their disassembly lines. Skilled workers were characterised by more than three years of experience in disassembly-related roles and fell within the top 20% in terms of disassembly efficiency and quality. Group A consisted of junior workers guided by a technical manual, while Group B comprised junior workers who received assistance from the AR-assisted system. Group C consisted of skilled workers without AR assistance, and Group D comprised skilled workers equipped with wearable AR devices but who did not utilise the system for disassembly. To assess the impact of the AR-assisted disassembly system on the efficiency of junior workers, experiments were conducted on Groups A and B. Experiments were also conducted on skilled workers, both without and with wearable AR devices (Groups C and D, respectively), to determine whether wearing the devices would hinder their disassembly tasks. To mitigate potential randomness, the experiment was carried out in three rounds, with ten replicates per round, and the disassembly times were recorded; the results of the three parallel rounds were aggregated to form the final test results for each group. Workers with varying technical proficiency levels were carefully chosen, and a controlled-variable methodology was employed for the comparative experiments: each round involved the disassembly of identical batches of battery packs (same model and specification) by the distinct groups of workers. Every operation had to meet the operating procedure and targets of the disassembly process, so the ultimate evaluation metric was established as the time needed to meet the disassembly quality standards. As can be seen in Table 1, junior workers utilising the intelligent AR-assisted disassembly system averaged 272 s per disassembly, an improvement in disassembly efficiency of (316 − 272)/316 ≈ 13.9% over the 316 s averaged by those guided by traditional paper technical manuals. It is noted that the proposed approach could effectively reduce the disparity between unskilled and skilled disassembly workers, providing greater benefits for unskilled workers.
Meanwhile, skilled workers wearing the AR glasses (Group D) averaged 263 s, approximately 83.3% of the time required by the manual-guided junior workers and essentially on par with skilled workers without the device (Group C, 265 s), confirming that wearing the AR device did not hinder their disassembly tasks.
This research introduces an AR-assisted disassembly approach whose efficiency has been validated through the comparative assessment of overall disassembly time. The comparative evaluation of the proposed method was carried out under two typical conditions: (1) the selection of retired power batteries for this research was limited to mainstream models commonly encountered within the current recycling channels; (2) presently, established operational procedures and instructions for disassembly were available, which were complemented by disassembly performance indicators. Further investigations can be conducted in a more comprehensive manner. This includes identifying potential disassembly bottlenecks and conducting a detailed step-by-step comparison of the disassembly process.

5. Conclusions and Future Work

The increasing complexity of disassembly processes for highly precise and customisable products has led to the application of AR technology to enhance disassembly efficiency, and the demand for intelligent AR-based disassembly assistance methods has grown significantly. This research proposes an AR-assisted disassembly approach based on instance segmentation. The method perceives the disassembly environment and provides intuitive visual guidance to users, aiming to enhance disassembly quality and efficiency in complex disassembly scenarios. The Mask R-CNN instance segmentation algorithm is employed to analyse and understand the disassembly scene, thereby improving the interaction efficiency of the auxiliary system. Additionally, the posture and position of parts are determined through ICP registration, and the corresponding process and disassembly instructions are visually presented to assist workers in completing disassembly tasks. The effectiveness of this method was verified through multiple experiments.
Future research will address the complexity of disassembly scenes using scene graphs. Scene graphs offer a means to comprehend the disassembly environment, including the relationships between disassembly objects. This enables the provision of user-friendly and intelligent AR assistance.

Author Contributions

Conceptualization, J.L. and B.L.; methodology, J.L.; software, B.L.; validation, J.L. and L.D.; formal analysis, J.L. and B.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; supervision, J.B.; project administration, J.L. and J.B.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Municipal Natural Science Foundation of Shanghai [21ZR1400800].

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. List of the Questionnaire

Part 1. Personal details about operators’ background and experience.
1.1 What is your gender?
☐ Male ☐ Female
1.2 What is your age range?
☐ 20–29 years old ☐ 30–39 years old ☐ 40–49 years old ☐ 50–59 years old
1.3 What is your educational level?
☐ Junior high school or below
☐ Senior high school
☐ Junior college or above
1.4 How would you describe yourself?
☐ Less than one year of experience in disassembly-related work
☐ One to three years of experience in disassembly-related work
☐ More than three years of experience in disassembly-related work
Part 2. The adaptability analysis concerning the use of the AR-assisted system.
2.1 What is the most difficult thing in the disassembly of end-of-life power batteries?
☐ To read references/operating procedures and study on my own.
☐ To conduct disassembly tasks without instructions.
☐ To handle different disassembly tools for various types of targets.
☐ To improve disassembly efficiency on my own, manually (without hardware or software support).
2.2 What is your view of the design of the current AR-assisted disassembly system?
☐ It is inconvenient for disassembly operations.
☐ It is complicated and not user-friendly.
☐ It works, but I would prefer better interaction and communication.
☐ It helps and works perfectly.
2.3 Supposing the system provides different options to visualise the disassembly instructions, how many modes would you prefer to have?
☐ One
☐ Two
☐ Three
☐ Four and more
2.4 What is your attitude towards the application of AR-assisted tools for disassembly operations?
☐ I would be glad to do my best to learn it.
☐ I am afraid to use it.
☐ I want to try to use it but do not know how to use it.
☐ I would try and decide later.
Part 3. The acceptability inquiry in terms of subjects’ experience.
3.1 Learning AR-assisted techniques can satisfy my practical requirements.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly
3.2 It is not difficult to learn AR-related skills.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly
3.3 I am able to operate the AR-assisted disassembly system after training.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly
3.4 I do not feel uncomfortable or stressed while operating the system.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly
3.5 I can well comprehend the information presented in the system.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly
3.6 The way information is presented is acceptable.
☐ Disagree strongly ☐ Disagree ☐ No comment ☐ Agree ☐ Agree strongly

References

  1. Athanasopoulou, L.; Bikas, H.; Papacharalampopoulos, A.; Stavropoulos, P.; Chryssolouris, G. An industry 4.0 approach to electric vehicles. Int. J. Comput. Integr. Manuf. 2023, 36, 334–348. [Google Scholar] [CrossRef]
  2. Bibra, E.M.; Connelly, E.; Dhir, S.; Drtil, M.; Henriot, P.; Hwang, I.; Le Marois, J.B.; McBain, S.; Paoli, L.; Teter, J. Global EV Outlook 2022: Securing Supplies for an Electric Future; 2022. Available online: https://www.iea.org/events/global-ev-outlook-2022 (accessed on 30 October 2023).
  3. Xu, W.; Cui, J.; Liu, B.; Liu, J.; Yao, B.; Zhou, Z. Human-robot collaborative disassembly line balancing considering the safe strategy in remanufacturing. J. Clean. Prod. 2021, 324, 129158. [Google Scholar] [CrossRef]
  4. Poschmann, H.; Brueggemann, H.; Goldmann, D. Disassembly 4.0: A review on using robotics in disassembly tasks as a way of automation. Chem. Ing. Tech. 2020, 92, 341–359. [Google Scholar] [CrossRef]
  5. Tresca, G.; Cavone, G.; Carli, R.; Cerviotti, A.; Dotoli, M. Automating bin packing: A layer building matheuristics for cost effective logistics. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1599–1613. [Google Scholar] [CrossRef]
  6. Reljić, V.; Milenković, I.; Dudić, S.; Šulc, J.; Bajči, B. Augmented reality applications in industry 4.0 environment. Appl. Sci. 2021, 11, 5592. [Google Scholar] [CrossRef]
  7. Masood, T.; Egger, J. Augmented reality in support of Industry 4.0—Implementation challenges and success factors. Robot. Comput.-Integr. Manuf. 2019, 58, 181–195. [Google Scholar] [CrossRef]
  8. Wang, X.; Ong, S.; Nee, A. Real-virtual components interaction for assembly simulation and planning. Robot. Comput.-Integr. Manuf. 2016, 41, 102–114. [Google Scholar] [CrossRef]
  9. Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228. [Google Scholar] [CrossRef]
  10. Ferraguti, F.; Pini, F.; Gale, T.; Messmer, F.; Storchi, C.; Leali, F.; Fantuzzi, C. Augmented reality based approach for on-line quality assessment of polished surfaces. Robot. Comput.-Integr. Manuf. 2019, 59, 158–167. [Google Scholar] [CrossRef]
  11. Hu, J.; Zhao, G.; Xiao, W.; Li, R. AR-based deep learning for real-time inspection of cable brackets in aircraft. Robot. Comput.-Integr. Manuf. 2023, 83, 102574. [Google Scholar] [CrossRef]
  12. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Unmanned Aerial Vehicle (UAV) path planning and control assisted by Augmented Reality (AR): The case of indoor drones. Int. J. Prod. Res. 2023, 1–22. [Google Scholar] [CrossRef]
  13. Van Lopik, K.; Sinclair, M.; Sharpe, R.; Conway, P.; West, A. Developing augmented reality capabilities for industry 4.0 small enterprises: Lessons learnt from a content authoring case study. Comput. Ind. 2020, 117, 103208. [Google Scholar] [CrossRef]
  14. Liu, Y.; Li, S.; Wang, J.; Zeng, H.; Lu, J. A computer vision-based assistant system for the assembly of narrow cabin products. Int. J. Adv. Manuf. Technol. 2015, 76, 281–293. [Google Scholar] [CrossRef]
  15. Wang, X.; Ong, S.; Nee, A.Y.C. Multi-modal augmented-reality assembly guidance based on bare-hand interface. Adv. Eng. Inform. 2016, 30, 406–421. [Google Scholar] [CrossRef]
  16. de Souza Cardoso, L.F.; Mariano, F.C.M.Q.; Zorzal, E.R. Mobile augmented reality to support fuselage assembly. Comput. Ind. Eng. 2020, 148, 106712. [Google Scholar] [CrossRef]
  17. Fang, W.; Fan, W.; Ji, W.; Han, L.; Xu, S.; Zheng, L.; Wang, L. Distributed cognition based localization for AR-aided collaborative assembly in industrial environments. Robot. Comput.-Integr. Manuf. 2022, 75, 102292. [Google Scholar] [CrossRef]
  18. Zhu, Z.; Liu, C.; Xu, X. Visualisation of the digital twin data in manufacturing by using augmented reality. Procedia CIRP 2019, 81, 898–903. [Google Scholar] [CrossRef]
  19. Parsa, S.; Saadat, M. Human-robot collaboration disassembly planning for end-of-life product disassembly process. Robot. Comput.-Integr. Manuf. 2021, 71, 102170. [Google Scholar] [CrossRef]
  20. Li, S.; Zheng, P.; Zheng, L. An AR-assisted deep learning-based approach for automatic inspection of aviation connectors. IEEE Trans. Ind. Inform. 2020, 17, 1721–1731. [Google Scholar] [CrossRef]
  21. Ben Abdallah, H.; Jovančević, I.; Orteu, J.J.; Brèthes, L. Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images. J. Imaging 2019, 5, 81. [Google Scholar] [CrossRef]
  22. Jia, C.; Liu, Z. Collision detection based on augmented reality for construction robot. In Proceedings of the 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 18–21 December 2020; pp. 194–197. [Google Scholar]
  23. Liu, Q.; Liu, Z.; Xu, W.; Tang, Q.; Zhou, Z.; Pham, D.T. Human-robot collaboration in disassembly for sustainable manufacturing. Int. J. Prod. Res. 2019, 57, 4027–4044. [Google Scholar] [CrossRef]
  24. Schoettler, G.; Nair, A.; Ojea, J.A.; Levine, S.; Solowjow, E. Meta-reinforcement learning for robotic industrial insertion tasks. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October–24 January 2021; pp. 9728–9735. [Google Scholar]
  25. Kong, S.; Liu, C.; Shi, Y.; Xie, Y.; Wang, K. Review of application prospect of deep reinforcement learning in intelligent manufacturing. Comput. Eng. Appl. 2021, 57, 49–59. [Google Scholar]
  26. Ding, D.; Ding, Z.; Wei, G.; Han, F. An improved reinforcement learning algorithm based on knowledge transfer and applications in autonomous vehicles. Neurocomputing 2019, 361, 243–255. [Google Scholar] [CrossRef]
  27. Du, Z.J.; Wang, W.; Yan, Z.Y.; Dong, W.; Wang, W. A Physical Human-Robot Interaction Algorithm Based on Fuzzy Reinforcement Learning for Minimally Invasive Surgery Manipulator. Robot 2017, 39, 363–370. [Google Scholar]
  28. Jin, Z.-H.; Liu, A.-D.; Lu, L. Hierarchical Human-robot Cooperative Control Based on GPR and Deep Reinforcement. Acta Autom. Sin. 2022, 48, 2352–2360. [Google Scholar]
  29. Chen, L.; Jiang, S.; Liu, J.; Wang, C.; Zhang, S.; Xie, C.; Liang, J.; Xiao, Y.; Song, R. Rule mining over knowledge graphs via reinforcement learning. Knowl.-Based Syst. 2022, 242, 108371. [Google Scholar] [CrossRef]
  30. Zhao, X.; Li, C.; Tang, Y.; Cui, J. Reinforcement learning-based selective disassembly sequence planning for the end-of-life products with structure uncertainty. IEEE Robot. Autom. Lett. 2021, 6, 7807–7814. [Google Scholar] [CrossRef]
  31. Zhao, X.; Zhao, H.; Chen, P.; Ding, H. Model accelerated reinforcement learning for high precision robotic assembly. Int. J. Intell. Robot. Appl. 2020, 4, 202–216. [Google Scholar] [CrossRef]
  32. Luo, J.; Solowjow, E.; Wen, C.; Ojea, J.A.; Agogino, A.M. Deep reinforcement learning for robotic assembly of mixed deformable and rigid objects. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2062–2069. [Google Scholar]
  33. Inoue, T.; De Magistris, G.; Munawar, A.; Yokoya, T.; Tachibana, R. Deep reinforcement learning for high precision assembly tasks. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 819–825. [Google Scholar]
  34. Arana-Arexolaleiba, N.; Urrestilla-Anguiozar, N.; Chrysostomou, D.; Bøgh, S. Transferring human manipulation knowledge to industrial robots using reinforcement learning. Procedia Manuf. 2019, 38, 1508–1515. [Google Scholar] [CrossRef]
  35. Zhang, J.; Liu, H.; Chang, Q.; Wang, L.; Gao, R.X. Recurrent neural network for motion trajectory prediction in human-robot collaborative assembly. CIRP Ann. 2020, 69, 9–12. [Google Scholar] [CrossRef]
  36. Moutselos, K.; Berdouses, E.; Oulis, C.; Maglogiannis, I. Recognizing occlusal caries in dental intraoral images using deep learning. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 1617–1620. [Google Scholar]
  37. Xiao, J.; Liu, G.; Wang, K.; Si, Y. Cow identification in free-stall barns based on an improved Mask R-CNN and an SVM. Comput. Electron. Agric. 2022, 194, 106738. [Google Scholar] [CrossRef]
  38. Zhu, G.; Piao, Z.; Kim, S.C. Tooth detection and segmentation with mask R-CNN. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 70–72. [Google Scholar]
  39. Rashid, U.; Javid, A.; Khan, A.R.; Liu, L.; Ahmed, A.; Khalid, O.; Saleem, K.; Meraj, S.; Iqbal, U.; Nawaz, R. A hybrid mask RCNN-based tool to localize dental cavities from real-time mixed photographic images. PeerJ Comput. Sci. 2022, 8, e888. [Google Scholar] [CrossRef]
Figure 1. The framework of the AR-assisted disassembly approach.
Figure 2. Disassembly object recognition based on Mask R-CNN.
Figure 3. System configuration and disassembly processes.
Figure 4. Depth information segmentation of disassembly objects: (a) schematic diagram of mapping between camera imaging and physical space; (b) generation of instance point cloud of disassembled objects.
Figure 5. Registration process using ICP matching algorithm.
Figure 6. Marking during disassembly process.
Figure 7. Scene perception graph (left) and scene segmentation graph (right).
Figure 8. Disassembly guidance flowchart.
Figure 9. The prototype of the AR-assisted disassembly system.
Figure 10. Schematic diagram of AR-assisted disassembly.
Table 1. Comparison of different disassembly methods (unit: seconds).

| Group   | First Round | Second Round | Third Round | Average Time |
|---------|-------------|--------------|-------------|--------------|
| Group A | 325         | 316          | 307         | 316          |
| Group B | 277         | 271          | 268         | 272          |
| Group C | 261         | 260          | 276         | 265          |
| Group D | 267         | 263          | 260         | 263          |