Search Results (10)

Search Parameters:
Keywords = autonomous robotic welding

55 pages, 20925 KiB  
Review
Current Trends and Emerging Strategies in Friction Stir Spot Welding for Lightweight Structures: Innovations in Tool Design, Robotics, and Composite Reinforcement—A Review
by Suresh Subramanian, Elango Natarajan, Ali Khalfallah, Gopal Pudhupalayam Muthukutti, Reza Beygi, Borhen Louhichi, Ramesh Sengottuvel and Chun Kit Ang
Crystals 2025, 15(6), 556; https://doi.org/10.3390/cryst15060556 - 11 Jun 2025
Cited by 1 | Viewed by 1933
Abstract
Friction stir spot welding (FSSW) is a solid-state joining technique increasingly favored in industries requiring high-quality, defect-free welds in lightweight and durable structures, such as the automotive, aerospace, and marine industries. This review examines the current advancements in FSSW, focusing on the relationships between microstructure, properties, and performance under load. FSSW offers numerous benefits over traditional welding, particularly for joining both similar and dissimilar materials. Key process parameters, including tool design, rotational speed, axial force, and dwell time, are discussed for their impact on weld quality. Innovations in robotics are enhancing FSSW’s accuracy and efficiency, while numerical simulations aid in optimizing process parameters and predicting material behavior. The addition of nano/microparticles, such as carbon nanotubes and graphene, has further improved weld strength and thermal stability. This review identifies areas for future research, including refining robotic programming, using artificial intelligence for autonomous welding, and exploring nano/microparticle reinforcement in FSSW composites. FSSW continues to advance solid-state joining technologies, providing critical insights for optimizing weld quality in sheet material applications.

27 pages, 11130 KiB  
Article
A Dual-Modal Robot Welding Trajectory Generation Scheme for Motion Based on Stereo Vision and Deep Learning
by Xinlei Li, Jiawei Ma, Shida Yao, Guanxin Chi and Guangjun Zhang
Materials 2025, 18(11), 2593; https://doi.org/10.3390/ma18112593 - 1 Jun 2025
Viewed by 712
Abstract
To address the challenges of redundant point cloud processing and insufficient robustness under complex working conditions in existing teaching-free methods, this study proposes a dual-modal perception framework termed “2D image autonomous recognition and 3D point cloud precise planning”, which integrates stereo vision and deep learning. First, an improved U-Net deep learning model is developed, where VGG16 serves as the backbone network and a dual-channel attention module (DAM) is incorporated, achieving robust weld segmentation with a mean intersection over union (mIoU) of 0.887 and an F1-Score of 0.940. Next, the weld centerline is extracted using the Zhang–Suen skeleton refinement algorithm, and weld feature points are obtained through polynomial fitting optimization to establish cross-modal mapping between 2D pixels and 3D point clouds. Finally, a groove feature point extraction algorithm based on improved RANSAC combined with an equal-area weld bead filling strategy is designed to enable multi-layer and multi-bead robot trajectory planning, achieving a mean absolute error (MAE) of 0.238 mm in feature point positioning. Experimental results demonstrate that the method maintains high accuracy under complex working conditions such as noise interference and groove deformation, achieving a system accuracy of 0.208 mm and weld width fluctuation within ±0.15 mm, thereby significantly improving the autonomy and robustness of robot trajectory planning.
(This article belongs to the Section Materials Simulation and Design)
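For readers who want to prototype the centerline-extraction step mentioned in this abstract, the sketch below skeletonizes a binary weld mask (scikit-image’s 2D skeletonize applies Zhang–Suen-style thinning) and fits a polynomial through the skeleton pixels. The function name, polynomial degree, and sampling density are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative sketch (not the paper's code): extract a weld centerline from a
# binary segmentation mask via Zhang-Suen-style thinning, then fit a polynomial
# through the skeleton pixels to get smooth candidate feature points.
import numpy as np
from skimage.morphology import skeletonize  # 2D default uses Zhang's thinning method

def weld_centerline(mask: np.ndarray, degree: int = 3, n_points: int = 200) -> np.ndarray:
    """mask: (H, W) binary weld segmentation; returns (n_points, 2) centerline samples."""
    skeleton = skeletonize(mask.astype(bool))      # one-pixel-wide weld skeleton
    ys, xs = np.nonzero(skeleton)                  # skeleton pixel coordinates
    coeffs = np.polyfit(xs, ys, degree)            # centerline model y = f(x)
    x_fit = np.linspace(xs.min(), xs.max(), n_points)
    return np.stack([x_fit, np.polyval(coeffs, x_fit)], axis=1)
```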

18 pages, 6634 KiB  
Article
Development and Evaluation of a Multiaxial Modular Ground Robot for Estimating Soybean Phenotypic Traits Using an RGB-Depth Sensor
by James Kemeshi, Young Chang, Pappu Kumar Yadav, Maitiniyazi Maimaitijiang and Graig Reicks
AgriEngineering 2025, 7(3), 76; https://doi.org/10.3390/agriengineering7030076 - 11 Mar 2025
Viewed by 1326
Abstract
Achieving global sustainable agriculture requires farmers worldwide to adopt smart agricultural technologies, such as autonomous ground robots. However, most ground robots are either task- or crop-specific and expensive for small-scale farmers and smallholders. Therefore, there is a need for cost-effective robotic platforms that are modular by design and can be easily adapted to varying tasks and crops. This paper describes the hardware design of a unique, low-cost multiaxial modular agricultural robot (ModagRobot), and its field evaluation for soybean phenotyping. The ModagRobot’s chassis was designed without any welded components, making it easy to adjust trackwidth, height, ground clearance, and length. For this experiment, the ModagRobot was equipped with an RGB-Depth (RGB-D) sensor and adapted to safely navigate over soybean rows to collect RGB-D images for estimating soybean phenotypic traits. RGB images were processed using the Excess Green Index to estimate the percent canopy ground coverage area. 3D point clouds generated from RGB-D images were used to estimate canopy height (CH) and the 3D Profile Index of sample plots using linear regression. Aboveground biomass (AGB) was estimated using extracted phenotypic traits. Results showed an R², RMSE, and RRMSE of 0.786, 0.0181 m, and 2.47%, respectively, between estimated CH and measured CH. AGB estimated using all extracted traits showed an R², RMSE, and RRMSE of 0.59, 0.0742 kg/m², and 8.05%, respectively, compared to the measured AGB. The results demonstrate the effectiveness of the ModagRobot for in-row crop phenotyping.
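The Excess Green Index used in this study is a simple chromaticity formula (ExG = 2g − r − b on band-normalized RGB); a minimal sketch of how percent canopy coverage could be computed from it follows. The Otsu threshold is an assumption for illustration, not necessarily the thresholding used in the paper.

```python
# Minimal sketch (assumed details, not the paper's pipeline): compute the Excess
# Green Index on normalized RGB and report the vegetated fraction of the image.
import numpy as np
from skimage.filters import threshold_otsu

def canopy_coverage(rgb: np.ndarray) -> float:
    """rgb: (H, W, 3) uint8 image; returns canopy ground-coverage fraction in [0, 1]."""
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-9                     # guard against all-zero pixels
    r, g, b = (img[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b                              # Excess Green Index
    vegetation = exg > threshold_otsu(exg)             # assumed: Otsu split of ExG values
    return float(vegetation.mean())
```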

19 pages, 6503 KiB  
Article
Weld Seam Tracking and Detection Robot Based on Artificial Intelligence Technology
by Jiuxin Wang, Lei Huang, Jiahui Yao, Man Liu, Yurong Du, Minghu Zhao, Yaoheng Su and Dingze Lu
Sensors 2023, 23(15), 6725; https://doi.org/10.3390/s23156725 - 27 Jul 2023
Cited by 7 | Viewed by 4011
Abstract
The regular detection of weld seams in large-scale special equipment is crucial for improving safety and efficiency, and this can be achieved effectively through the use of weld seam tracking and detection robots. In this study, a wall-climbing robot with integrated seam tracking and detection was designed, and the wall-climbing function was realized via a permanent magnet array and Mecanum wheels. The function of weld seam tracking and detection was realized using a DeepLabv3+ semantic segmentation model. Several optimizations were implemented to enhance the deployment of the DeepLabv3+ semantic segmentation model on embedded devices. MobileNetV2 was used to replace the feature extraction network of the original model, and the convolutional block attention module (CBAM) attention mechanism was introduced into the encoder module. All traditional 3×3 convolutions were substituted with depthwise separable dilated convolutions. Subsequently, the welding path was fitted using the least squares method based on the segmentation results. The experimental results showed that the size of the improved model was reduced by 92.9%, to only 21.8 MB. The average precision reached 98.5%, surpassing the original model by 1.4%. The inference speed was accelerated to 21 frames/s, satisfying the real-time requirements of industrial detection. The detection robot successfully realizes the autonomous identification and tracking of weld seams. This study contributes notably to the development of automatic and intelligent weld seam detection technologies.
(This article belongs to the Section Sensors and Robotics)
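The abstract notes that standard 3×3 convolutions were replaced with depthwise separable dilated convolutions to shrink the model; a generic PyTorch sketch of such a block is shown below. Channel counts, the dilation rate, and the module name are illustrative, not taken from the paper.

```python
# Generic sketch of a depthwise separable dilated 3x3 convolution block (assumed
# layout; not the authors' network definition).
import torch
import torch.nn as nn

class DepthwiseSeparableDilatedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # Depthwise: one 3x3 dilated filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: y = DepthwiseSeparableDilatedConv(64, 128)(torch.randn(1, 64, 96, 96))
```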

18 pages, 10943 KiB  
Article
Weld Seam Identification and Tracking of Inspection Robot Based on Deep Learning Network
by Jie Li, Beibei Li, Linjie Dong, Xingsong Wang and Mengqian Tian
Drones 2022, 6(8), 216; https://doi.org/10.3390/drones6080216 - 20 Aug 2022
Cited by 26 | Viewed by 5713
Abstract
The weld seams of large spherical tank equipment should be regularly inspected. Autonomous inspection robots can greatly enhance inspection efficiency and save costs. However, the accurate identification and tracking of weld seams by inspection robots remains a challenge. Based on the designed wall-climbing robot, an intelligent inspection robotic system using deep learning is proposed in this study to achieve weld seam identification and tracking. The inspection robot used Mecanum wheels and permanent magnets to adhere to metal walls. For weld seam identification, Mask R-CNN was used to segment weld seam instances. Through image processing combined with the Hough transform, weld paths were extracted with high accuracy. The robotic system efficiently completed weld seam instance segmentation after training on 2281 weld seam images. Experimental results indicated that the deep learning-based robotic system was faster and more accurate than previous methods: the average time for identifying and calculating weld paths was about 180 ms, and the mask average precision (AP) was about 67.6%. The inspection robot could automatically track seam paths, with a maximum drift angle of 3° and a maximum offset distance of 10 mm. This intelligent weld seam identification system will greatly promote the application of inspection robots.
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
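The mask-to-path step described in this abstract (instance segmentation followed by image processing and a Hough transform) can be prototyped with OpenCV as sketched below; the edge and Hough thresholds are assumptions for illustration, not the paper's settings.

```python
# Rough sketch (assumed thresholds): turn a binary weld-seam mask, e.g. from
# Mask R-CNN, into candidate weld-path line segments via Canny + probabilistic Hough.
import cv2
import numpy as np

def weld_path_segments(mask: np.ndarray) -> np.ndarray:
    """mask: (H, W) binary weld mask; returns an (N, 4) array of (x1, y1, x2, y2) segments."""
    edges = cv2.Canny((mask > 0).astype(np.uint8) * 255, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    return np.empty((0, 4), dtype=int) if lines is None else lines.reshape(-1, 4)
```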

18 pages, 6819 KiB  
Article
Gird Based Line Segment Detector and Application: Vision System for Autonomous Ship Small Assembly Line
by Jinhong Ding and Chongben Ni
J. Mar. Sci. Eng. 2021, 9(11), 1313; https://doi.org/10.3390/jmse9111313 - 22 Nov 2021
Cited by 3 | Viewed by 2810
Abstract
The shipbuilding industry demands intelligent robots capable of performing various tasks without laborious pre-teaching or programming. Vision-system-guided robots could be a solution for autonomous operation. This paper introduces the principle and technical details of a vision system that guides welding robots in ship small assembly production. TOF sensors are employed to collect spatial points of workpieces. The huge data volume and complex topology make the reconstruction of small assemblies difficult. A new unsupervised line segment detector is proposed to reconstruct ship small assemblies from spatial points. Verified using data from actual manufacturing, the proposed method demonstrated good robustness, which is a great advantage for industrial applications. The work has been implemented in shipyards and shows good commercial potential. Intelligent, flexible industrial robots could be realized with the findings of this study, which will push forward intelligent manufacturing in the shipbuilding industry.
(This article belongs to the Special Issue Smart Technologies for Shipbuilding)
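The unsupervised detector itself is the paper's contribution and is not reproduced here, but the basic step of recovering a dominant line segment from noisy projected TOF points can be sketched with a RANSAC fit, as below; the residual threshold and the 2D projection are assumptions for illustration.

```python
# Illustrative sketch only: fit one dominant line segment to projected TOF points
# with RANSAC and return its endpoints (not the paper's grid-based detector).
import numpy as np
from sklearn.linear_model import RANSACRegressor

def dominant_segment(points_xy: np.ndarray, residual_threshold: float = 5.0) -> np.ndarray:
    """points_xy: (N, 2) projected workpiece points; returns (2, 2) segment endpoints."""
    x, y = points_xy[:, :1], points_xy[:, 1]
    ransac = RANSACRegressor(residual_threshold=residual_threshold).fit(x, y)
    inliers = points_xy[ransac.inlier_mask_]
    x_ends = np.array([[inliers[:, 0].min()], [inliers[:, 0].max()]])
    return np.hstack([x_ends, ransac.predict(x_ends).reshape(-1, 1)])
```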

15 pages, 6251 KiB  
Article
Using a Stochastic Agent Model to Optimize Performance in Divergent Interest Tacit Coordination Games
by Dor Mizrahi, Inon Zuckerman and Ilan Laufer
Sensors 2020, 20(24), 7026; https://doi.org/10.3390/s20247026 - 8 Dec 2020
Cited by 18 | Viewed by 2659
Abstract
In recent years, collaborative robots have become major market drivers in Industry 5.0, which aims to incorporate them alongside humans in a wide array of settings ranging from welding to rehabilitation. Improving human–machine collaboration entails using computational algorithms that save processing as well as communication costs. In this study we have constructed an agent that can choose when to cooperate using an optimal strategy. The agent was designed to operate in the context of divergent interest tacit coordination games in which communication between the players is not possible and the payoff is not symmetric. The agent’s model was based on a behavioral model that can predict the probability of a player converging on prominent solutions with salient features (e.g., focal points) based on the player’s Social Value Orientation (SVO) and the specific game features. SVO theory pertains to the preferences of decision makers when allocating joint resources between themselves and another player in the context of behavioral game theory. The agent selected stochastically between one of two possible policies, a greedy or a cooperative policy, based on the probability of a player converging on a focal point. The distribution of the number of points obtained by the autonomous agent incorporating the SVO in the model was better than the results obtained by the human players who played against each other (i.e., the distribution associated with the agent had a higher mean value). Moreover, the distribution of points gained by the agent was better than that of either of the separate strategies the agent could choose from, namely, always choosing a greedy or a focal point solution. To the best of our knowledge, this is the first attempt to construct an intelligent agent that maximizes its utility by incorporating the belief system of the player in the context of tacit bargaining. This reward-maximizing strategy selection process based on the SVO can also be potentially applied in other human–machine contexts, including multiagent systems.
(This article belongs to the Special Issue Human-Computer Interaction in Smart Environments)
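As a toy illustration of the stochastic policy selection this abstract describes, the snippet below mixes a greedy and a focal-point (cooperative) policy according to a behavioral-model estimate of the partner's convergence probability; the payoff numbers and the mixing rule are illustrative assumptions, not the authors' model.

```python
# Toy sketch (assumed payoffs and mixing rule, not the paper's agent): choose
# stochastically between a focal-point policy and a greedy policy based on the
# predicted probability that the human partner converges on the focal point.
import random

def choose_policy(p_focal: float) -> str:
    """p_focal: behavioral-model estimate that the partner picks the focal point."""
    return "focal_point" if random.random() < p_focal else "greedy"

def expected_coordination_payoff(p_focal: float,
                                 payoff_match: float = 5.0,
                                 payoff_miss: float = 1.0) -> float:
    """Expected payoff of playing the focal point under the assumed payoff structure."""
    return p_focal * payoff_match + (1.0 - p_focal) * payoff_miss
```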

30 pages, 4679 KiB  
Article
Human–Robot Interface for Embedding Sliding Adjustable Autonomy Methods
by Piatan Sfair Palar, Vinícius de Vargas Terres and André Schneider de Oliveira
Sensors 2020, 20(20), 5960; https://doi.org/10.3390/s20205960 - 21 Oct 2020
Cited by 4 | Viewed by 3598
Abstract
This work discusses a novel human–robot interface for a climbing robot that inspects weld beads in storage tanks in the petrochemical industry. The approach aims to adapt robot autonomy to the operator’s experience, where a remote industrial joystick works in conjunction with an electromyographic armband as inputs. This armband is worn on the forearm and can detect gestures from the operator and rotation angles of the arm. Information from the industrial joystick and the armband is used to control the robot via a Fuzzy controller. The controller works with sliding autonomy (using as inputs the angular velocity of the industrial controller, the electromyography reading, the weld bead position in the storage tank, and the rotation angles executed by the operator’s arm) to produce a system capable of recognizing the operator’s skill and correcting operator mistakes during operation. The output of the Fuzzy controller is the level of autonomy to be used by the robot. The implemented levels are Manual (the operator controls the angular and linear velocities of the robot); Shared (speeds are shared between the operator and the autonomous system); Supervisory (the robot controls the angular velocity to stay on the weld bead, and the operator controls the linear velocity); and Autonomous (the operator defines the endpoint and the robot controls both linear and angular velocities). These autonomy levels, along with the proposed sliding autonomy, are then analyzed through robot experiments in a simulated environment, illustrating the purpose of each mode. The proposed approach is evaluated in virtual industrial scenarios by distinct real operators.
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
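A hand-rolled sketch of the idea behind the Fuzzy sliding-autonomy selector follows: triangular memberships over a couple of normalized signals feed a small rule base whose weighted result picks one of the four autonomy levels. The membership shapes, rules, and input names are assumptions for illustration, not the authors' controller.

```python
# Illustrative sketch (assumed memberships and rules): map normalized operator/robot
# signals to one of the four autonomy levels with a tiny fuzzy rule base.
import numpy as np

LEVELS = ["Manual", "Shared", "Supervisory", "Autonomous"]

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def autonomy_level(operator_error: float, seam_offset: float) -> str:
    """Inputs normalized to [0, 1]; larger values mean worse operator performance."""
    low  = lambda v: tri(v, -0.5, 0.0, 0.5)
    med  = lambda v: tri(v,  0.0, 0.5, 1.0)
    high = lambda v: tri(v,  0.5, 1.0, 1.5)
    strengths = np.array([
        min(low(operator_error),  low(seam_offset)),   # -> Manual
        min(med(operator_error),  low(seam_offset)),   # -> Shared
        min(med(operator_error),  med(seam_offset)),   # -> Supervisory
        max(high(operator_error), high(seam_offset)),  # -> Autonomous
    ])
    idx = int(round(float(np.dot(strengths, np.arange(4))) / (float(strengths.sum()) + 1e-9)))
    return LEVELS[idx]

# autonomy_level(0.1, 0.1) -> 'Manual'; autonomy_level(0.9, 0.8) -> 'Autonomous'
```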

22 pages, 5380 KiB  
Article
An Agent-Based System for Automated Configuration and Coordination of Robotic Operations in Real Time—A Case Study on a Car Floor Welding Process
by Sotiris Makris, Kosmas Alexopoulos, George Michalos and Andreas Sardelis
J. Manuf. Mater. Process. 2020, 4(3), 95; https://doi.org/10.3390/jmmp4030095 - 18 Sep 2020
Cited by 9 | Viewed by 3703
Abstract
This paper investigates the feasibility of using an agent-based framework to configure, control, and coordinate dynamic, real-time robotic operations with the use of ontology manufacturing principles. Production automation agents use ontology models that represent the knowledge in a manufacturing environment for control and configuration purposes. The ontological representation of the production environment is discussed. Using this framework, the manufacturing resources are capable of autonomously embedding themselves into the existing manufacturing enterprise with minimal human intervention, while, at the same time, the coordination of manufacturing operations is achieved without extensive human involvement. The framework was implemented, tested, and validated in a feasibility study on a laboratory robotic assembly cell with typical industrial components, using real data derived from a car-floor welding process.
(This article belongs to the Special Issue Cyber Physical Production Systems)

23 pages, 7277 KiB  
Article
Self-Organization and Self-Coordination in Welding Automation with Collaborating Teams of Industrial Robots
by Günther Starke, Daniel Hahn, Diana G. Pedroza Yanez and Luz M. Ugalde Leal
Machines 2016, 4(4), 23; https://doi.org/10.3390/machines4040023 - 30 Nov 2016
Cited by 8 | Viewed by 7410
Abstract
In welding automation, there is growing interest in applying teams of industrial robots to perform manufacturing processes through collaboration. Although robot teamwork can increase profitability and cost-effectiveness in production, programming the robots is still a problem. It is extremely time-consuming and requires special expertise in synchronizing the activities of the robots to avoid collisions. Therefore, a research project has been initiated to solve those problems. This paper presents strategies, concepts, and research results on applying the Robot Operating System (ROS) and ROS-based solutions to overcome existing technical deficits through the integration of self-organization capabilities, autonomous path planning, and self-coordination of the robots’ work. The new approach should contribute to improving the application of robot teamwork and collaboration in the manufacturing sector at a higher level of flexibility and with a reduced need for human intervention.
(This article belongs to the Special Issue Mechatronics: Intelligent Machines)
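As a toy illustration of the self-coordination idea (robots dividing the welding work among themselves without a human programmer), the sketch below greedily balances seams across a team; the project's ROS-based implementation is not reproduced, and the allocation rule is an assumption for illustration only.

```python
# Toy sketch (assumed allocation rule, not the project's ROS solution): robots in a
# team claim weld seams longest-first so that workloads stay balanced and no seam
# is welded twice.
def allocate_seams(seam_lengths, n_robots=2):
    """Returns one list of seam indices per robot, balanced by total seam length."""
    assignments = [[] for _ in range(n_robots)]
    workload = [0.0] * n_robots
    for idx in sorted(range(len(seam_lengths)), key=lambda i: -seam_lengths[i]):
        robot = workload.index(min(workload))   # least-loaded robot claims the seam
        assignments[robot].append(idx)
        workload[robot] += seam_lengths[idx]
    return assignments

# allocate_seams([1.2, 0.8, 2.0, 0.5]) -> [[2, 3], [0, 1]]
```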