
Search Results (2,387)

Search Parameters:
Keywords = robot machining

47 pages, 21577 KB  
Review
Modern Control Meets Machine Learning: A Review and Taxonomy of Synergistic Approaches for Robotics Applications
by Xiangyu Zhang, Guowei Li, Shahab Shokouhi and May-Win L. Thein
Actuators 2026, 15(5), 235; https://doi.org/10.3390/act15050235 (registering DOI) - 27 Apr 2026
Abstract
This paper explores the emerging synergy between control theory and machine learning in robotics, focusing on methods that combine model-based strategies with data-driven adaptation. The authors highlight how classical techniques, such as model predictive control and adaptive control, are being enhanced by reinforcement learning, imitation learning, and neural models to address challenges in complex, uncertain environments. Emphasis is placed on real-world platforms (e.g., legged systems, aerial robots, and manipulators) with special attention to advanced domains such as multi-agent systems and coordination. The authors, in addition, establish a taxonomy to categorize these hybrid approaches as “learning-for-control”, “control-for-learning”, or “co-designed architectures”. This paper also reflects upon key open problems, including sim-to-real transfer, safety, and the need for verifiable learning-based controllers, all facets that help to outline a roadmap for future research. Full article
(This article belongs to the Special Issue Advanced Learning and Intelligent Control Algorithms for Robots)

32 pages, 18066 KB  
Article
Grapevine Winter Pruning Point Localization Using YOLO-Based Instance Segmentation
by Magdalena Kapłan and Kamil Buczyński
Agriculture 2026, 16(9), 943; https://doi.org/10.3390/agriculture16090943 - 24 Apr 2026
Viewed by 243
Abstract
Winter pruning is a key management practice in viticulture that directly affects vine architecture, yield balance, and grape quality. At the same time, it is a highly labor-intensive operation, and the selective identification of appropriate cutting locations remains one of the main challenges limiting the automation of pruning in vineyards. Advances in machine vision provide new opportunities to support the development of robotic pruning systems. The objective of this study was to develop and evaluate a vision-based method for estimating grapevine pruning points and cutting lines using instance segmentation outputs generated by YOLO models. A dataset of 1500 RGB images of dormant grapevines was collected under field conditions in the Nobilis vineyard located in southeastern Poland. Two annotation strategies were implemented to define pruning regions. YOLO-based instance segmentation models were trained and evaluated for detecting cutting-related structures. Based on the predicted segmentation masks, a geometry-based method termed PCAcutSeg-V was developed to estimate class-dependent cutting points and cutting lines using principal component analysis applied to object contours. The results indicate that YOLOv8 and YOLO11 architectures achieved the highest segmentation performance among the evaluated models. The simplified annotation strategy provided more stable geometric inputs for the PCAcutSeg-V method, enabling more reliable estimation of cutting points and cutting lines compared with the extended annotation approach. When combined with the PCAcutSeg-V method, the proposed perception–geometry pipeline achieved high effectiveness in pruning decision estimation. The method was further implemented in a real-time processing pipeline using an RGB camera and an edge computing platform, where it maintained performance consistent with the results obtained from offline image analysis. 
These findings demonstrate that combining deep learning-based instance segmentation with deterministic geometric reasoning enables accurate and interpretable estimation of grapevine pruning locations and provides a promising foundation for future autonomous pruning systems. Full article
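As a rough illustration of the geometric step this abstract describes (principal component analysis applied to object contours to derive cutting points and lines), the sketch below fits a principal axis to 2-D contour points with plain NumPy. The function names and the offset convention are illustrative assumptions, not the paper's PCAcutSeg-V implementation:

```python
import numpy as np

def principal_axis(points):
    """Fit the dominant direction of a 2-D contour via PCA.

    points: (N, 2) array of contour coordinates (e.g. sampled from a
    predicted segmentation mask). Returns (centroid, unit direction).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The leading eigenvector of the covariance of the centred points
    # is the first principal component (the long axis of an elongated cane).
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    direction = eigvecs[:, np.argmax(eigvals)]
    return centroid, direction / np.linalg.norm(direction)

def cutting_point_and_line(points, offset=0.0):
    """Place a cutting point on the principal axis (shifted by `offset`
    from the centroid) and return the cutting-line direction, taken
    perpendicular to that axis."""
    centroid, d = principal_axis(points)
    cut_point = centroid + offset * d
    line_dir = np.array([-d[1], d[0]])   # perpendicular to the axis
    return cut_point, line_dir

# Synthetic elongated contour roughly aligned with the x-axis
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0.0, 10.0, 200),
                       rng.normal(0.0, 0.3, 200)])
p, n = cutting_point_and_line(pts)
```

For an object elongated along x, the recovered cutting line is close to vertical and the cutting point sits near the contour's centroid.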

18 pages, 4323 KB  
Article
Real-Time Pose Correction of an Industrial Robot for Machining Using Photogrammetry
by Roberto Alonso, Beñat Iñigo, Ibai Leizea, Pedro González de Alaiza Martínez, Jon Lopez de Zubiria and Jokin Munoa
J. Manuf. Mater. Process. 2026, 10(5), 147; https://doi.org/10.3390/jmmp10050147 - 23 Apr 2026
Viewed by 250
Abstract
A photogrammetry-based error compensation solution, comprising calibration, positioning compensation, and accuracy validation methodologies, is presented for the aerospace sector. It enables industrial robots to assist in manufacturing new composite materials, offering versatility and reconfigurability at a lower cost than the currently used milling machines. Against a ground truth measured by a laser tracker, the system improved real-time accuracy from ±0.685 to ±0.203 mm on average and from ±1.621 to ±0.498 mm at peak, following the ISO 9283 standard. With a real part in a large volume under industrial operating conditions, accuracy improved from ±0.534 to ±0.080 mm on average and from ±1.804 to ±0.456 mm at peak, accounting for occlusions and remaining robust to payload, waviness, and backlash effects. Full article
(This article belongs to the Special Issue Next-Generation Machine Tools and Machining Technology)
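The ISO 9283 figures quoted in this abstract rest on the standard's definitions of unidirectional pose accuracy (distance from the commanded position to the barycentre of the attained positions) and position repeatability. A minimal sketch of those two computations, with illustrative function names and synthetic measurements:

```python
import numpy as np

def positioning_accuracy(commanded, attained):
    """ISO 9283-style pose accuracy (position part): the Euclidean
    distance between the commanded position and the barycentre of
    the attained positions over repeated visits.

    commanded: (3,) target position in mm.
    attained:  (n, 3) measured positions (e.g. laser tracker) in mm.
    """
    mean_attained = np.asarray(attained, dtype=float).mean(axis=0)
    return float(np.linalg.norm(mean_attained - np.asarray(commanded)))

def repeatability(attained):
    """ISO 9283-style position repeatability: mean radial distance of
    the attained positions from their barycentre plus three standard
    deviations of that distance."""
    pts = np.asarray(attained, dtype=float)
    dists = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return float(dists.mean() + 3.0 * dists.std(ddof=1))

# Three synthetic repeat visits to one commanded pose (mm)
cmd = np.array([100.0, 0.0, 50.0])
meas = cmd + np.array([[0.2, 0.0, 0.1],
                       [0.3, -0.1, 0.1],
                       [0.1, 0.1, 0.0]])
ap = positioning_accuracy(cmd, meas)
```

Real-time compensation, as described in the abstract, feeds such measured deviations back into the robot controller rather than merely reporting them.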

29 pages, 637 KB  
Article
The Low-Carbon Policy and Urban Industrial Transformation: Evidence from China’s Low-Carbon City Pilot Using Double Machine Learning
by Yu Li, Zhenghuang Shi, Wenhui Chen, Yibai Wang and Yiwen Ye
Sustainability 2026, 18(8), 4088; https://doi.org/10.3390/su18084088 - 20 Apr 2026
Viewed by 171
Abstract
China’s Against the backdrop of the global low-carbon transition, balancing ecological protection and economic development has become a critical challenge. This study aims to examine whether and how the Low-Carbon City Pilot (LCCP) policy enhances urban industrial transformation momentum. Using panel data from [...] Read more.
China’s Against the backdrop of the global low-carbon transition, balancing ecological protection and economic development has become a critical challenge. This study aims to examine whether and how the Low-Carbon City Pilot (LCCP) policy enhances urban industrial transformation momentum. Using panel data from 283 Chinese cities during 2008–2023, we employ a double machine learning (DDML) approach and use industrial robot installation density as a proxy for industrial development momentum. The results show that the LCCP policy significantly promotes industrial transformation and upgrading. Mechanism analysis indicates that the policy strengthens transformation momentum by enhancing government support and increasing public environmental awareness, particularly in cities with lower innovation costs. The effects are more pronounced in resource-based cities, non-old industrial bases, and economically developed cities, while also exacerbating regional disparities as more developed cities benefit more. These findings provide important implications for achieving coordinated development between carbon reduction and industrial transformation. Full article
(This article belongs to the Special Issue Advances in Low-Carbon Economy Towards Sustainability)
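The double machine learning estimator this abstract names can be sketched in its simplest "partialling-out" form: residualize both the outcome and the treatment on the controls with a nuisance learner, then regress residual on residual. The sketch below uses OLS as a stand-in learner on synthetic data; it is a generic illustration of the orthogonalized score, not the paper's DDML specification (which would add cross-fitting and flexible learners):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 3))                                # observed controls
D = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)    # treatment exposure
theta_true = 2.0
Y = theta_true * D + X @ np.array([1.0, 0.3, -0.5]) + rng.normal(size=n)

def fit_predict(X, y):
    """Nuisance learner. OLS here for brevity; double ML allows any
    flexible ML model (forest, boosting, neural net) in its place."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# Partialling-out (Neyman-orthogonal) estimator: residualize Y and D
# on the controls, then regress residual on residual.
y_res = Y - fit_predict(X, Y)
d_res = D - fit_predict(X, D)
theta_hat = float(d_res @ y_res / (d_res @ d_res))
```

Because the score is orthogonal to the nuisance regressions, small errors in the two first-stage fits do not bias the treatment-effect estimate to first order.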

24 pages, 45558 KB  
Article
Pose- and Direction-Dependent Modulation and Accuracy in Robotic Milling
by Chandan, Daksh Singh Chauhan, Nalli Gnaneswara Rao, Ranjeet Kumar, Sajan Kapil and Mohit Law
J. Manuf. Mater. Process. 2026, 10(4), 137; https://doi.org/10.3390/jmmp10040137 - 19 Apr 2026
Viewed by 528
Abstract
Robotic milling offers flexibility and lower capital cost than conventional CNC machining but is limited by low, pose-dependent structural stiffness. This study experimentally investigates how pose, cutting orientation, and engagement conditions govern dynamic response and machining accuracy, benchmarked against a CNC machine under matched conditions. Tool-point frequency response functions show that the robot exhibits dominant low-frequency structural modes at 8–15 Hz with compliances on the order of 10⁻⁵ m/N, one to two orders of magnitude more flexible than higher-frequency tool–holder modes (~10⁻⁶ m/N). In contrast, the CNC system is dominated by a stiff mode near 600 Hz (~2 × 10⁻⁷ m/N) with negligible low-frequency compliance. During cutting, the response is not resonance-driven; instead, low-frequency compliance induces modulation of spindle-synchronous vibrations, resulting in broadband spectral spreading and cycle-to-cycle variability. Poincaré analysis captures this modulation, which increases with spindle speed and depth of cut. Orientation-dependent alignment with compliant directions amplifies vibration and cross-axis coupling. Regression analysis shows a significant association between Z-direction vibration and depth-of-cut deviation (R = 0.739 locally; R = 0.363 globally). The results establish a framework linking compliance, modulation, and machining performance in robotic milling. Full article
(This article belongs to the Special Issue New Trends in Precision Machining Processes)

28 pages, 10998 KB  
Article
Introducing Brain–Computer Interfaces in Factories and Fabrication Lines for the Inclusion of Disabled Workers–Industry 5.0—A Modern Challenge and Opportunity
by Marian-Silviu Poboroniuc, Zoltán Nochta, Martin Klepal, Nina Hunter, Danut-Constantin Irimia, Alina Georgiana Baciu, Kelaja Schert, Tim Piotrowski and Alexandru Mitocaru
Multimodal Technol. Interact. 2026, 10(4), 41; https://doi.org/10.3390/mti10040041 - 17 Apr 2026
Viewed by 206
Abstract
Flexible factories and adaptive fabrication lines offer a testbed for advanced multimodal interaction concepts that can support the inclusion of disabled workers in Industry 5.0 manufacturing systems. The study synthesizes interdisciplinary data from ergonomics, industrial automation, and EU regulatory frameworks to establish a conceptual model for human-machine interaction. Building on conceptual modeling and a structured literature analysis, the study proposes a six-step integration framework that links task demands, worker capabilities, and interaction modalities within human-in-the-loop manufacturing environments. Although no empirical case study was conducted in this phase, an exemplary application is presented for a semi-automated bike wheel manufacturing process. Detailed machine-based assembly line flows and simulated process data were utilized for illustrative purposes to depict the process and validate the proposed Capability–Task Matching Matrix. The results operationalize the human-centric vision of Industry 5.0 by providing a structured methodology for the inclusion of disabled workers within fabrication environments. The findings are organized into two primary components: the conceptual development of the Integration Approach and its practical application to a semi-automated industrial use-case. Finally, a particular focus is placed on Brain–Computer Interfaces (BCIs) as an emerging interaction channel that enables non-muscular control, attention monitoring, and neuroadaptive feedback, complementing conventional interfaces rather than replacing them. The framework is illustrated through application to the same semi-automated bicycle wheel assembly line, where BCI-supported interaction, augmented interfaces, and robotic assistance are mapped to specific production tasks and assessed in terms of feasibility and technological maturity. 
Drawing on the paper’s results, an explanatory 10-year roadmap outlines the feasibility and phased deployment of BCI solutions. It aligns technological advances with European regulations and a vision for a fully inclusive manufacturing enterprise. Full article

23 pages, 4740 KB  
Article
Hierarchical Fuzzy-Enhanced Soft-Constrained Model Predictive Control for Curvilinear Path Tracking in Autonomous Agricultural Machines
by Baidong Zhao, Chenghan Yang, Gang Zheng, Baurzhan Belgibaev, Madina Mansurova, Sholpan Jomartova and Dingkun Zheng
AgriEngineering 2026, 8(4), 156; https://doi.org/10.3390/agriengineering8040156 - 14 Apr 2026
Viewed by 337
Abstract
Precise curvilinear path tracking remains a persistent challenge for autonomous agricultural machines, where conventional Model Predictive Control (MPC) suffers from poor adaptability to varying curvatures and high computational overhead in unstructured farmland environments. This paper proposes a soft-constrained MPC framework enhanced by a two-layer fuzzy architecture and Recursive Least Squares filtering to address these limitations simultaneously. The first fuzzy layer dynamically adjusts the MPC prediction horizon in response to real-time path curvature, enabling proactive steering on complex curved trajectories. The second fuzzy layer tunes the state weighting matrix online based on lateral and heading deviations, improving transient tracking accuracy without increasing computational cost. Recursive Least Squares filtering is further integrated to suppress sensor noise and compensate for tire slip dynamics inherent to farmland operation. The proposed framework is validated using MATLAB simulations on both constant-curvature semicircular paths and variable-curvature S-curve trajectories at operational speeds of 2.0 and 2.5 m/s, followed by outdoor field trials on a scaled autonomous robot platform. Simulation results demonstrate average tracking error reductions of 52.7–55.9% on constant-curvature paths and 10.8–18.2% on variable-curvature paths compared to fixed-parameter soft-constrained MPC. Field experiments confirm practical viability, achieving an RMS lateral error of 0.131 m over a 50 m curved route on natural terrain. These results demonstrate that the hierarchical decomposition of adaptation objectives yields substantial accuracy gains while preserving real-time feasibility on resource-constrained embedded platforms. Full article
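A minimal sketch of the kind of fuzzy scheduling this abstract describes for the first layer (mapping path curvature to an MPC prediction horizon). The membership breakpoints, horizon values, and rule base below are illustrative assumptions, not the paper's tuned parameters:

```python
def ramp_up(x, a, b):
    """Rising shoulder membership: 0 below a, 1 above b, linear between."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def tri(x, a, b, c):
    """Triangular membership function supported on (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_horizon(curvature, n_short=10, n_mid=20, n_long=30):
    """Map path curvature (1/m) to an MPC prediction horizon length.

    Low curvature -> long horizon (plan far ahead on near-straights);
    high curvature -> short horizon (fast, cheap re-planning in turns).
    Weighted-average (centroid-style) defuzzification over three rules.
    """
    mu_low = 1.0 - ramp_up(curvature, 0.0, 0.2)   # "straight" set
    mu_med = tri(curvature, 0.0, 0.2, 0.4)        # "moderate curve" set
    mu_high = ramp_up(curvature, 0.2, 0.4)        # "sharp curve" set
    total = mu_low + mu_med + mu_high
    weighted = mu_low * n_long + mu_med * n_mid + mu_high * n_short
    return round(weighted / total)
```

The second fuzzy layer in the paper would tune the state weighting matrix online in the same rule-based fashion, using lateral and heading deviations as inputs.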

13 pages, 4062 KB  
Article
Robotic Harvesting of Apples Using ROS2
by Connor Ruybalid, Christian Salisbury and Duke M. Bulanon
Machines 2026, 14(4), 433; https://doi.org/10.3390/machines14040433 - 14 Apr 2026
Viewed by 391
Abstract
Rising global food demand, increasing labor costs, and farm labor shortages have created significant challenges for specialty crop production, particularly in labor-intensive tasks such as fruit harvesting. Robotic harvesting offers a promising long-term solution, yet its adoption in orchard environments remains limited due to unstructured conditions, variable lighting, and difficulties in fruit recognition and manipulation. This study presents an improved robotic fruit harvesting system, Orchard roBot (OrBot), developed by the Robotics Vision Lab at Northwest Nazarene University, with the goal of advancing autonomous apple harvesting applications. The updated OrBot platform integrates a dual-camera vision system consisting of an eye-to-hand stereo camera with a wide field of view for fruit detection and an eye-in-hand RGB-D camera for precise manipulation. The control architecture was redesigned using Robot Operating System 2 (ROS2) and Python, enabling modular subsystem development and coordination. Fruit detection was performed using a YOLOv5 deep learning model, and visual servoing was employed to guide the robotic manipulator toward the target fruit. System performance was evaluated through laboratory experiments using artificial trees and field tests conducted in a commercial apple orchard in Idaho. OrBot achieved a 100% harvesting success rate in indoor tests and a 75–80% success rate in outdoor orchard conditions. Experimental results demonstrate that the dual-camera approach significantly enhances fruit search efficiency and harvesting efficiency. Identified limitations include sensitivity to lighting conditions, end effector performance with varying fruit sizes, and depth estimation errors. Overall, the results indicate a positive potential toward effective robotic fruit harvesting and highlight key areas for future improvement in vision, manipulation, and system robustness. Full article

13 pages, 2375 KB  
Opinion
CsPbI3 Perovskites at the Edge of Commercialization: Persistent Barriers, Multidisciplinary Solutions, and the Emerging Role of AI
by Carlo Spampinato
J 2026, 9(2), 12; https://doi.org/10.3390/j9020012 - 13 Apr 2026
Viewed by 384
Abstract
All-inorganic cesium lead iodide (CsPbI3) has been investigated for more than a decade as an absorber for perovskite photovoltaics thanks to its attractive bandgap, thermal robustness compared with hybrid perovskites, and compatibility with tandem concepts. Yet, despite remarkable efficiency progress, CsPbI3 remains far from widespread commercialization. The core roadblock is the metastability of the photoactive black perovskite phases (α/γ/β) against transformation to the photoinactive yellow δ-phase under realistic conditions, amplified by defect chemistry, ion migration, and interfacial reactions. Additional barriers arise from scale-up constraints (film uniformity, throughput, solvent management), long-term operational stability (humidity, heat, UV, bias), and environmental/safety requirements, especially lead containment, sequestration, and end-of-life strategies. This review critically analyzes the intertwined physical, chemical, and engineering factors that still limit CsPbI3 deployment, with emphasis on how solutions in one domain can fail without co-design in others. It summarizes state-of-the-art stabilization strategies (size/strain engineering, additive/doping routes, surface/interface passivation, and encapsulation), highlights scalable manufacturing pathways including solvent-minimized and vacuum-assisted approaches, and discusses lead-mitigation technologies such as Pb-adsorbing functional layers. Finally, I argue that artificial intelligence (AI)—from machine-learning stability models to process monitoring, robotic optimization, and digital twins—has become essential to navigate the enormous parameter space of CsPbI3 materials and manufacturing. The paper concludes with actionable recommendations and future directions toward bankable, scalable, and sustainable CsPbI3 photovoltaics. Full article
(This article belongs to the Section Chemistry & Material Sciences)

27 pages, 12290 KB  
Review
Ground-Based Electromagnetic Methods for the Monitoring and Surveillance of Urban and Engineering Infrastructures: State-of-the-Art and Future Directions
by Vincenzo Cuomo, Jean Dumoulin, Vincenzo Lapenna and Francesco Soldovieri
Sustainability 2026, 18(8), 3822; https://doi.org/10.3390/su18083822 - 13 Apr 2026
Viewed by 530
Abstract
This review focuses on electromagnetic imaging methods widely used in urban geophysics and civil engineering. The rapid growth of the urban population and the increase in the frequency of extreme events related to climate change make novel approaches to the geophysical monitoring of urban areas and civil infrastructures essential in the context of programs for the sustainability and resilience of cities. In this scenario, there is a growing interest in using ground-based electromagnetic methods to investigate strategic infrastructures such as bridges, tunnels, dam embankments, power plants, energy plants and pipelines in a non-invasive way. The development of cost-effective, user-friendly sensor arrays, robust methodologies for tomographic data inversion, and AI-based and machine learning techniques has rapidly transformed these methods. This review critically analyzes the results relating to the application of ground-based electromagnetic methods in infrastructure monitoring and surveillance over the past 20 years by presenting a selection of best practice examples and studies planned to support programs for the resilience and maintenance of engineering infrastructures. The analysis reveals that these methods are highly effective in addressing a broad spectrum of monitoring issues in view of effective maintenance of civil infrastructures. In fact, these methods are essential for detecting the geometry of buried objects (e.g., bars and voids), enabling the early detection of degradation phenomena, and mapping water infiltration processes inside structures, as well as many other challenging applications. Finally, prospects for development are identified in terms of using soft robot technologies, miniaturized sensors, and AI-based methods to acquire, process and interpret data as well as to design smart operational guidelines for infrastructure management. Full article

38 pages, 5277 KB  
Review
Artificial Intelligence in Pulmonary Endoscopy: Current Evidence, Limitations, and Future Directions
by Sara Lopes, Miguel Mascarenhas, João Fonseca and Adelino F. Leite-Moreira
J. Imaging 2026, 12(4), 167; https://doi.org/10.3390/jimaging12040167 - 12 Apr 2026
Viewed by 263
Abstract
Background: Artificial intelligence (AI) is increasingly applied in pulmonary endoscopy, including diagnostic bronchoscopy, interventional pulmonology and endobronchial imaging. Advances in computer vision, machine learning and robotic systems have expanded the potential for automated lesion detection, navigation to peripheral pulmonary lesions, and real-time procedural support. However, the current evidence base remains heterogeneous, and translational challenges persist. Methods: This review summarizes current applications and developments of AI across white-light bronchoscopy (WLB), image-enhanced bronchoscopy (e.g., narrow-band imaging and autofluorescence imaging), endobronchial ultrasound (EBUS), virtual and robotic bronchoscopies, and workflow optimization and training. The authors also examine the methodological limitations, regulatory considerations, and implementation barriers that affect translation into routine practice. Results: Reported developments include deep learning-based models for mucosal abnormality detection, lymph-node characterization during EBUS-guided transbronchial needle aspiration (EBUS-TBNA), improved lesion localization, and reduction in operator-dependent variability. Additionally, AI-assisted simulation platforms and decision-support tools are reshaping training paradigms. Nevertheless, most studies remain retrospective or single-center, with limited external validation, dataset heterogeneity, unclear model explainability, and incomplete integration into clinical workflows. Conclusions: AI has the potential to support lesion detection, navigation, and training in pulmonary endoscopy. However, robust prospective validation, standardized datasets, transparent model reporting, robust data governance, multidisciplinary collaboration, and careful integration into clinical practice are required before widespread adoption. Full article
(This article belongs to the Section AI in Imaging)

26 pages, 8254 KB  
Article
Reconfigurable Compliant Joints (RCJs) for Functional Biomimicry in Assistive Devices and Wearable Robotic Systems
by Vanessa Young, Connor Talley, Sabrina Scarpinato, Gregory Sawicki and Ayse Tekes
Machines 2026, 14(4), 427; https://doi.org/10.3390/machines14040427 - 11 Apr 2026
Viewed by 389
Abstract
Compliant mechanisms have contributed to many advances in soft robotics, and there is strong motivation to translate these ideas to assistive devices where adaptive motion at the human interface is required. This work presents novel reconfigurable compliant joints (RCJs) as a parameterized joint element for functional biomimicry in lower-extremity joints for prosthetic knees and ankle–foot orthoses, with concepts that extend to other limb joints. The RCJ uses a rigid hub and outer ring joined by an array of flexible links with centerlines defined by cubic Bézier curves. Link shapes are organized into four Bézier classes (A–D), with base types using 10, 12, or 14 uniformly distributed link slots and variants generated by modifying active-link count and distribution, forming a structured morphology space of 12 configurations for machine design. Dual-extrusion 3D-printed prototypes are characterized by a custom testing apparatus using a 2.2 kN load cell at 25 mm/s over a 0–90° rotation range across six recorded load cycles to measure torque–angle curves and stiffness under large deformations. Angle-dependent stiffness is evaluated over three fixed intervals (0–30°, 30–60°, and 60–90°) to quantify multi-stage behavior. A 2-dimensional corotational frame model and a Simscape Multibody model, including a rolling-contact knee configuration, use the same parameterization to relate geometry, nonlinear mechanics, and system-level motion. Experiments and simulations show multi-stage torque–angle profiles and predictable stiffness modulation across all configurations, with both magnitude and transition angle tunable through Bézier class and active-link distribution, positioning the RCJ as a CAD/CAE-compatible joint architecture for assistive devices or wearable robotic systems and a basis for advancing functional biomimicry in compliant mechanism design. Full article
(This article belongs to the Special Issue Recent Advances in Compliant Mechanisms)
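The cubic Bézier centerlines used for the RCJ links follow the standard Bernstein form B(t) = (1−t)³P0 + 3(1−t)²tP1 + 3(1−t)t²P2 + t³P3. A minimal sampler is sketched below; the hub/ring radii and control points are illustrative, not the paper's parameterization:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample the cubic Bézier curve
    B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    at n parameter values t in [0, 1]. Points may be 2-D or 3-D."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Illustrative link centerline from an inner hub (radius 10) to an
# outer ring (radius 25); the middle control points set the link's
# curvature and hence its angle-dependent stiffness contribution.
curve = cubic_bezier([10.0, 0.0], [15.0, 6.0], [20.0, -6.0], [25.0, 0.0], n=100)
```

Varying the two interior control points per Bézier class, and the count and distribution of active links, generates the morphology space the abstract describes.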

49 pages, 675 KB  
Review
Automated Assembly of Large-Scale Aerospace Components: A Structured Narrative Survey of Emerging Technologies
by Kuai Zhou, Wenmin Chu, Peng Zhao, Xiaoxu Ji and Lulu Huang
Sensors 2026, 26(8), 2294; https://doi.org/10.3390/s26082294 - 8 Apr 2026
Viewed by 531
Abstract
Large-scale aerospace components (e.g., wings, fuselage sections, wing boxes, and rocket segments) feature large dimensions, low stiffness, complex interfaces, and strict assembly tolerances. Traditional rigid tooling and manual alignment struggle to meet the demands of high precision, efficiency, and flexibility in modern aerospace manufacturing. This paper presents a structured literature review on the automated assembly of large-scale aerospace components, summarizing advances in three core domains: pose adjustment and positioning mechanisms, digital measurement technologies, and trajectory planning and control. Particular emphasis is placed on two cross-cutting themes: measurement uncertainty analysis and flexible assembly, which are critical for high-quality docking. The review classifies pose adjustment mechanisms into four categories (NC positioners, parallel kinematic machines, industrial robots, and novel mechanisms) and digital measurement into five branches (vision metrology, large-scale metrology, measurement field construction, uncertainty analysis, and auxiliary techniques). It also outlines five trajectory planning and control routes, covering traditional methods, multi-sensor fusion, digital twins, flexible assembly, and emerging intelligent approaches. The analysis reveals that current research suffers from fragmentation among mechanism design, metrology, and control, with insufficient integration of uncertainty propagation and flexible deformation modeling. Future systems will rely on heterogeneous equipment collaboration, uncertainty-aware closed-loop control, high-fidelity flexible modeling, and digital twin-driven decision-making. This review provides a unified framework and a technical reference for developing reliable, flexible, and scalable automated assembly systems for next-generation aerospace structures. Full article
(This article belongs to the Section Sensors and Robotics)
19 pages, 3273 KB  
Article
A Comprehensive Analysis of Human–Machine Interaction: Teaching Pendant vs. Gesture Control in Industrial Robotics
by Robert Kristof, Valentin Ciupe, Erwin-Christian Lovasz and Ghadeer Ismael
Actuators 2026, 15(4), 210; https://doi.org/10.3390/act15040210 - 8 Apr 2026
Viewed by 352
Abstract
In collaborative robotics, efficiency and user experience play a central role. This study examines how perceived performance differs from measured performance when comparing two ways of controlling industrial robots: traditional teaching pendants and wearable EMG-based gesture control. A Myo Armband was used as an accessible 8-channel EMG platform, and three experiments were carried out on a Universal Robots UR10e to test pick-and-place tasks and precision positioning. Time and accuracy data were gathered, together with blind feedback from 13 participants, through a multi-criteria analysis framework. Even though the teaching pendant proved more accurate in every scenario, 85% of participants still rated gesture control higher in overall satisfaction. These results point to a notable gap between what users perceive and how they actually perform, and they suggest that user experience deserves more weight in the design of future robot control interfaces. Full article
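A multi-criteria analysis of this kind can be sketched as a weighted-sum aggregation over normalized criterion scores. The criteria names, weights, and scores below are hypothetical placeholders for illustration only; they do not reproduce the study's actual framework or data.

```python
# Minimal weighted-sum multi-criteria comparison of two control interfaces.
# All criteria, weights, and scores are illustrative assumptions.

def weighted_score(scores: dict, weights: dict) -> float:
    """Aggregate normalized criterion scores (0..1) by their weights."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

weights = {"accuracy": 0.4, "task_time": 0.3, "satisfaction": 0.3}

# Hypothetical normalized scores for each interface.
pendant = {"accuracy": 0.95, "task_time": 0.70, "satisfaction": 0.55}
gesture = {"accuracy": 0.75, "task_time": 0.80, "satisfaction": 0.90}

print(round(weighted_score(pendant, weights), 3))  # pendant aggregate
print(round(weighted_score(gesture, weights), 3))  # gesture aggregate
```

Such a scheme makes the perception/performance gap explicit: shifting weight from objective criteria (accuracy, time) toward satisfaction can reverse the ranking of the two interfaces.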
(This article belongs to the Special Issue Actuation and Sensing of Intelligent Soft Robots—2nd Edition)
30 pages, 28721 KB  
Article
Dual-Arm Robotic Textile Unfolding with Depth-Corrected Perception and Fold Resolution
by Tilla Egerhei Båserud, Joakim Johansen, Ajit Jha and Ilya Tyapin
Robotics 2026, 15(4), 78; https://doi.org/10.3390/robotics15040078 - 8 Apr 2026
Viewed by 502
Abstract
Reliable textile recycling requires automated unfolding to expose hidden hard components such as zippers, buttons, and metal fasteners, which otherwise risk damaging machinery and compromising downstream processes. This paper presents the design and implementation of an automated textile unfolding system based on a dual-arm robotic manipulation framework. The system uses two Interbotix WidowX 250s 6-DoF robotic arms and an Intel RealSense L515 LiDAR camera for visual perception. The unfolding process consists of three stages: initial dual-arm stretching to reduce major folds, refinement through a second stretch targeting the lower region, and a machine-learning stage that employs a YOLOv11 framework trained on depth-encoded textile images, followed by a depth-gradient-based estimator for fold direction. The system applies an extremity-based grasping strategy that selects leftmost and rightmost textile points from a custom error-corrected depth map, enabling robust grasp point selection, and a fold direction estimation method based on depth gradients around the detected fold. The most confident fold region is selected, an unfolding direction is determined using depth ranking, and the textile is manipulated until a flat state is confirmed through depth uniformity. Experiments show that depth correction significantly reduces spatial error in the robot frame, and that segmentation and extremity detection achieve high accuracy across varied fold configurations. The YOLOv11n-based model reaches 98.8% classification accuracy, and fold direction is estimated correctly in 87% of test cases. By enabling robust, largely autonomous textile unfolding, the system demonstrates a practical approach that could support safer and more efficient automated textile recycling workflows. Full article
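Two of the steps described in this abstract, extremity-based grasp point selection and the depth-uniformity flatness check, can be sketched as follows. The binary mask format, array shapes, toy depth values, and flatness threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Sketch of extremity-based grasping (leftmost/rightmost textile pixels in a
# segmentation mask) and a flatness test via depth uniformity. Mask encoding,
# units (mm), and the std-dev threshold are assumed, not taken from the paper.

def extremity_grasp_points(mask: np.ndarray):
    """Return (row, col) of the leftmost and rightmost pixels set in a binary mask."""
    rows, cols = np.nonzero(mask)
    left = np.argmin(cols)
    right = np.argmax(cols)
    return (int(rows[left]), int(cols[left])), (int(rows[right]), int(cols[right]))

def is_flat(depth: np.ndarray, mask: np.ndarray, max_std_mm: float = 5.0) -> bool:
    """Declare the textile flat when depth variation over the masked region is small."""
    return float(np.std(depth[mask > 0])) < max_std_mm

# Toy 5x5 example: a horizontal strip of textile at uniform depth.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 1:4] = 1
depth = np.full((5, 5), 500.0)  # assumed depth in mm from the camera

left_pt, right_pt = extremity_grasp_points(mask)
print(left_pt, right_pt)     # (2, 1) (2, 3)
print(is_flat(depth, mask))  # True: uniform depth over the textile
```

In practice the mask would come from the segmentation stage and the depth map from the error-corrected LiDAR data; raising a pixel's depth within the mask (a residual fold) makes `is_flat` return False, triggering another manipulation cycle.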
(This article belongs to the Section Sensors and Control in Robotics)