Search Results (180)

Search Parameters:
Keywords = robotic grasping detection

14 pages, 8570 KB  
Article
Enhancing Robotic Grasping Detection Using Visual–Tactile Fusion Perception
by Dongyuan Zheng and Yahong Chen
Sensors 2026, 26(2), 724; https://doi.org/10.3390/s26020724 - 21 Jan 2026
Abstract
With the advancement of tactile sensors, researchers increasingly integrate tactile perception into robotics, but only for tasks such as object reconstruction, classification, recognition, and grasp state assessment. In this paper, we rethink the relationship between visual and tactile perception and propose a novel robotic grasping detection method based on visual–tactile perception. First, we construct a visual–tactile dataset containing the grasp stability for each potential grasping position. Next, we introduce a novel Grasp Stability Prediction Module (GSPM) to generate a grasp stability probability map, providing the grasp detection network with prior knowledge of grasp stability for each possible grasp position. Finally, the map is multiplied element-wise with the corresponding color image and fed into the grasp detection network. Experimental results demonstrate that our novel visual–tactile fusion method significantly enhances robotic grasping detection accuracy. Full article
(This article belongs to the Section Sensors and Robotics)
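
The fusion step this abstract describes reduces to an element-wise product between the stability probability map and the color image before grasp detection. A minimal PyTorch sketch under assumed tensor layouts; `fuse_stability_prior` and `grasp_net` are hypothetical names, not the paper's own code:

```python
import torch

def fuse_stability_prior(rgb, stability_map, grasp_net):
    """Weight the color image by per-pixel grasp-stability probabilities
    before passing it to the grasp detection network."""
    # rgb:           (B, 3, H, W) image tensor
    # stability_map: (B, 1, H, W) probabilities in [0, 1], broadcast over channels
    fused = rgb * stability_map
    return grasp_net(fused)

# Toy usage with an identity "network" as a stand-in.
rgb = torch.rand(1, 3, 224, 224)
prior = torch.rand(1, 1, 224, 224)
out = fuse_stability_prior(rgb, prior, grasp_net=lambda x: x)
```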

28 pages, 26208 KB  
Article
Real-Time Target-Oriented Grasping Framework for Resource-Constrained Robots
by Dongxiao Han, Haorong Li, Yuwen Li and Shuai Chen
Sensors 2026, 26(2), 645; https://doi.org/10.3390/s26020645 - 18 Jan 2026
Viewed by 75
Abstract
Target-oriented grasping has become increasingly important in household and industrial environments, and deploying such systems on mobile robots is particularly challenging due to limited computational resources. To address these limitations, we present an efficient framework for real-time target-oriented grasping on resource-constrained platforms, supporting both click-based grasping for unknown objects and category-based grasping for known objects. To reduce model complexity while maintaining detection accuracy, YOLOv8 is compressed using a structured pruning method. For grasp pose generation, a pretrained GR-ConvNetv2 predicts candidate grasps, which are restricted to the target object using masks generated by MobileSAMv2. A geometry-based correction module then adjusts the position, angle, and width of the initial grasp poses to improve grasp accuracy. Finally, extensive experiments were carried out on the Cornell and Jacquard datasets, as well as in real-world single-object, cluttered, and stacked scenarios. The proposed framework achieves grasp success rates of 98.8% on the Cornell dataset and 95.8% on the Jacquard dataset, with over 90% success in real-world single-object and cluttered settings, while maintaining real-time performance of 67 ms and 75 ms per frame in the click-based and category-specified modes, respectively. These experiments demonstrate that the proposed framework achieves high grasping accuracy and robust performance, with an efficient design that enables deployment on mobile and resource-constrained robots. Full article
(This article belongs to the Section Sensors and Robotics)
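
Restricting grasp candidates to the detected target, as the abstract describes for the MobileSAMv2 masks, can be expressed as indexing the candidate grasps with the object mask. A rough NumPy sketch under assumed array layouts; the variable names are illustrative, not the framework's API:

```python
import numpy as np

def best_grasp_on_target(grasp_centers, quality, mask):
    """Return the index of the highest-quality grasp whose center pixel lies
    inside the target segmentation mask, or None if no candidate qualifies."""
    # grasp_centers: (N, 2) integer pixel coordinates (row, col)
    # quality:       (N,) grasp quality scores
    # mask:          (H, W) boolean segmentation of the target object
    inside = mask[grasp_centers[:, 0], grasp_centers[:, 1]]
    if not inside.any():
        return None
    candidates = np.flatnonzero(inside)
    return int(candidates[np.argmax(quality[candidates])])
```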

29 pages, 4242 KB  
Article
Electro-Actuated Customizable Stacked Fin Ray Gripper for Adaptive Object Handling
by Ratchatin Chancharoen, Kantawatchr Chaiprabha, Worathris Chungsangsatiporn, Pimolkan Piankitrungreang, Supatpromrungsee Saetia, Tanarawin Viravan and Gridsada Phanomchoeng
Actuators 2026, 15(1), 52; https://doi.org/10.3390/act15010052 - 13 Jan 2026
Viewed by 128
Abstract
Soft robotic grippers provide compliant and adaptive manipulation, but most existing designs address actuation speed, adaptability, modularity, or sensing individually rather than in combination. This paper presents an electro-actuated customizable stacked Fin Ray gripper that integrates these capabilities within a single design. The gripper employs a compact solenoid for fast grasping, multiple vertically stacked Fin Ray segments for improved 3D conformity, and interchangeable silicone or TPU fins that can be tuned for task-specific stiffness and geometry. In addition, a light-guided, vision-based sensing approach is introduced to capture deformation without embedded sensors. Experimental studies—including free-fall object capture and optical shape sensing—demonstrate rapid solenoid-driven actuation, adaptive grasping behavior, and clear visual detectability of fin deformation. Complementary simulations using Cosserat-rod modeling and bond-graph analysis characterize the deformation mechanics and force response. Overall, the proposed gripper provides a practical soft-robotic solution that combines speed, adaptability, modular construction, and straightforward sensing for diverse object-handling scenarios. Full article
(This article belongs to the Special Issue Soft Actuators and Robotics—2nd Edition)

20 pages, 4633 KB  
Article
Teleoperation System for Service Robots Using a Virtual Reality Headset and 3D Pose Estimation
by Tiago Ribeiro, Eduardo Fernandes, António Ribeiro, Carolina Lopes, Fernando Ribeiro and Gil Lopes
Sensors 2026, 26(2), 471; https://doi.org/10.3390/s26020471 - 10 Jan 2026
Viewed by 262
Abstract
This paper presents an immersive teleoperation framework for service robots that combines real-time 3D human pose estimation with a Virtual Reality (VR) interface to support intuitive, natural robot control. The operator is tracked using MediaPipe for 2D landmark detection and an Intel RealSense D455 RGB-D (Red-Green-Blue plus Depth) camera for depth acquisition, enabling 3D reconstruction of key joints. Joint angles are computed using efficient vector operations and mapped to the kinematic constraints of an anthropomorphic arm on the CHARMIE service robot. A VR-based telepresence interface provides stereoscopic video and head-motion-based view control to improve situational awareness during manipulation tasks. Experiments in real-world object grasping demonstrate reliable arm teleoperation and effective telepresence; however, vision-only estimation remains limited for axial rotations (e.g., elbow and wrist yaw), particularly under occlusions and unfavorable viewpoints. The proposed system provides a practical pathway toward low-cost, sensor-driven, immersive human–robot interaction for service robotics in dynamic environments. Full article
(This article belongs to the Section Intelligent Sensors)
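
Computing a joint angle from three reconstructed 3D landmarks is a small vector operation of the kind the abstract mentions. A sketch under assumed landmark ordering; the mapping onto CHARMIE's actual joint limits is not shown:

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at landmark b (e.g., the elbow) formed by 3D points a-b-c,
    such as shoulder-elbow-wrist, in degrees."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Example: a right angle at the middle landmark.
print(joint_angle_deg([0, 1, 0], [0, 0, 0], [1, 0, 0]))  # ~90.0
```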

23 pages, 17893 KB  
Article
Multimodal Control of Manipulators: Coupling Kinematics and Vision for Self-Driving Laboratory Operations
by Shifa Sulaiman, Amarnath Harikumar, Simon Bøgh and Naresh Marturi
Robotics 2026, 15(1), 17; https://doi.org/10.3390/robotics15010017 - 9 Jan 2026
Viewed by 213
Abstract
Autonomous experimental platforms increasingly rely on robust, vision-guided robotic manipulation to support reliable and repeatable laboratory operations. This work presents a modular motion-execution subsystem designed for integration into self-driving laboratory (SDL) workflows, focusing on the coupling of real-time visual perception with smooth and stable manipulator control. The framework enables autonomous detection, tracking, and interaction with textured objects through a hybrid scheme that couples advanced motion planning algorithms with real-time visual feedback. Kinematic analysis of the manipulator is performed using screw theory formulations, which provide a rigorous foundation for deriving forward kinematics and the space Jacobian. These formulations are further employed to compute inverse kinematic solutions via the Damped Least Squares (DLS) method, ensuring stable and continuous joint trajectories even in the presence of redundancy and singularities. Motion trajectories toward target objects are generated using the RRT* algorithm, offering optimal path planning under dynamic constraints. Object pose estimation is achieved through a vision workflow integrating feature-driven detection and homography-guided depth analysis, enabling adaptive tracking and dynamic grasping of textured objects. The manipulator’s performance is quantitatively evaluated using smoothness metrics, RMSE pose errors, and joint motion profiles, including velocity continuity, acceleration, jerk, and snap. Simulation results demonstrate that the proposed subsystem delivers stable, smooth, and reproducible motion execution, establishing a validated baseline for the manipulation layer of next-generation SDL architectures. Full article
(This article belongs to the Special Issue Visual Servoing-Based Robotic Manipulation)
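
The Damped Least Squares update mentioned in the abstract replaces the plain pseudo-inverse with a regularized one so the joint step stays bounded near singularities. A generic sketch, not tied to the paper's screw-theory Jacobian or manipulator:

```python
import numpy as np

def dls_step(J, twist_error, damping=0.05):
    """One damped-least-squares IK update: dq = J^T (J J^T + lambda^2 I)^-1 e.
    The damping term keeps the step finite when the Jacobian loses rank."""
    JJt = J @ J.T
    reg = (damping ** 2) * np.eye(JJt.shape[0])
    return J.T @ np.linalg.solve(JJt + reg, twist_error)

# Example with a random 6x7 Jacobian for a redundant arm.
J = np.random.default_rng(0).normal(size=(6, 7))
dq = dls_step(J, np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0]))
```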

16 pages, 5236 KB  
Article
Intelligent Disassembly System for PCB Components Integrating Multimodal Large Language Model and Multi-Agent Framework
by Li Wang, Liu Ouyang, Huiying Weng, Xiang Chen, Anna Wang and Kexin Zhang
Processes 2026, 14(2), 227; https://doi.org/10.3390/pr14020227 - 8 Jan 2026
Viewed by 225
Abstract
The escalating volume of waste electrical and electronic equipment (WEEE) poses a significant global environmental challenge. The disassembly of printed circuit boards (PCBs), a critical step for resource recovery, remains inefficient due to limitations in the adaptability and dexterity of existing automated systems. This paper proposes an intelligent disassembly system for PCB components that integrates a multimodal large language model (MLLM) with a multi-agent framework. The MLLM serves as the system’s cognitive core, enabling high-level visual-language understanding and task planning by converting images into semantic descriptions and generating disassembly strategies. A state-of-the-art object detection algorithm (YOLOv13) is incorporated to provide fine-grained component localization. This high-level intelligence is seamlessly connected to low-level execution through a multi-agent framework that orchestrates collaborative dual robotic arms. One arm controls a heater for precise solder melting, while the other performs fine “probing-grasping” actions guided by real-time force feedback. Experiments were conducted on 30 decommissioned smart electricity meter PCBs, evaluating the system on recognition rate, capture rate, melting rate, and time consumption for seven component types. Results demonstrate that the system achieved a 100% melting rate across all components and high recognition rates (90–100%), validating its strengths in perception and thermal control. However, the capture rate varied significantly, highlighting the grasping of small, low-profile components as the primary bottleneck. This research presents a significant step towards autonomous, non-destructive e-waste recycling by effectively combining high-level cognitive intelligence with low-level robotic control, while also clearly identifying key areas for future improvement. Full article

33 pages, 14779 KB  
Article
A Vision-Based Robot System with Grasping-Cutting Strategy for Mango Harvesting
by Qianling Liu and Zhiheng Lu
Agriculture 2026, 16(1), 132; https://doi.org/10.3390/agriculture16010132 - 4 Jan 2026
Viewed by 402
Abstract
Mango is the second most widely cultivated tropical fruit in the world. Its harvesting mainly relies on manual labor. During the harvest season, the hot weather leads to low working efficiency and high labor costs. Current research on automatic mango harvesting mainly focuses on locating the fruit stem harvesting point, followed by stem clamping and cutting. However, these methods are less effective when the stem is occluded. To address these issues, this study first acquires images of four mango varieties in a mixed cultivation orchard and builds a dataset. Mango detection and occlusion-state classification models are then established based on YOLOv11m and YOLOv8l-cls, respectively. The detection model achieves an AP0.5–0.95 (average precision at IoU = 0.50:0.05:0.95) of 90.21%, and the accuracy of the classification model is 96.9%. Second, based on the mango growth characteristics, detected mango bounding boxes and binocular vision, we propose a spatial localization method for the mango grasping point. Building on this, a mango-grasping and stem-cutting end-effector is designed. Finally, a mango harvesting robot system is developed, and verification experiments are carried out. The experimental results show that the harvesting method and procedure are well-suited for situations where the fruit stem is occluded, as well as for fruits with no occlusion or partial occlusion. The mango grasping success rate reaches 96.74%, the stem cutting success rate is 91.30%, and the fruit injury rate is less than 5%. The average image processing time is 119.4 ms. The results prove the feasibility of the proposed methods. Full article
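
The binocular localization of the grasping point ultimately rests on the pinhole stereo relation Z = fB/d. A simplified sketch assuming a rectified image pair and an already-matched point; the camera parameters in the example are illustrative, not those of the harvesting rig:

```python
def stereo_point_3d(u_left, v_left, u_right, focal_px, baseline_m, cx, cy):
    """Triangulate a matched pixel from a rectified stereo pair.
    Depth follows Z = f * B / d with disparity d = u_left - u_right."""
    disparity = u_left - u_right
    z = focal_px * baseline_m / disparity
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return x, y, z

# Example: 40 px disparity, 920 px focal length, 60 mm baseline.
print(stereo_point_3d(700, 380, 660, 920.0, 0.06, 640, 360))
```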

27 pages, 5707 KB  
Review
Design and Sensing Frameworks of Soft Octopus-Inspired Grippers Toward Artificial Intelligence
by Seunghoon Choi, Junwon Jang, Junho Lee and Da Wan Kim
Biomimetics 2025, 10(12), 813; https://doi.org/10.3390/biomimetics10120813 - 4 Dec 2025
Viewed by 939
Abstract
Soft robotics provides compliance, safe interaction, and adaptability that rigid systems cannot easily achieve. The octopus offers a powerful biological model, combining reversible suction adhesion, continuum arm motion, and reliable performance in wet environments. This review examines recent octopus-inspired soft grippers through three functional dimensions: structural and sensing devices, control strategies, and AI-driven applications. We summarize suction-cup geometries, tentacle-like actuators, and hybrid structures, together with optical, triboelectric, ionic, and deformation-based sensing modules for contact detection, force estimation, and material recognition. We then discuss control frameworks that regulate suction engagement, arm curvature, and feedback-based grasp adjustment. Finally, we outline AI-assisted and neuromorphic-oriented approaches that use event-driven sensing and distributed, spike-inspired processing to support adaptive and energy-conscious decision-making. By integrating developments across structure, sensing, control, and computation, this review describes how octopus-inspired grippers are advancing from morphology-focused designs toward perception-enabled and computation-aware robotic platforms. Full article
(This article belongs to the Special Issue Bioinspired Engineered Systems)

14 pages, 4161 KB  
Article
Diffusion-Plating Al2O3 Film for Friction and Corrosion Protection of Marine Sensors
by Yaoyao Liu, Longbo Li, Daling Wei, Kangwei Xu, Liangliang Liu, Long Li and Zhongzhen Wu
Micromachines 2025, 16(12), 1344; https://doi.org/10.3390/mi16121344 - 28 Nov 2025
Viewed by 375
Abstract
To extend the service life of sensors in seawater, this work prepared an integrated diffusion-plated Al2O3 film using high-power impulse magnetron sputtering (HiPIMS). The tribological properties of the Al2O3 film in a marine environment were tested using a tribometer. The morphology and evolution of the Al2O3 film before and after the friction tests were investigated by characterization techniques such as field emission scanning electron microscopy (FESEM). The results demonstrate that the Al2O3 film exhibits excellent tribological performance in the marine environment, significantly enhancing the wear resistance of the substrate material. Furthermore, with the protection of the Al2O3 film, the designed pressure sensor achieved high-sensitivity detection of minute operational forces underwater. When applied to a robotic gripper for manipulation tasks, the coated underwater sensor enabled accurate perception of subtle motion states of the grasped objects. Full article
(This article belongs to the Special Issue Micro-Energy Harvesting Technologies and Self-Powered Sensing Systems)

26 pages, 8108 KB  
Article
A Multi-Step Grasping Framework for Zero-Shot Object Detection in Everyday Environments Based on Lightweight Foundational General Models
by Ruibo Li, Tie Zhang and Yanbiao Zou
Sensors 2025, 25(23), 7125; https://doi.org/10.3390/s25237125 - 21 Nov 2025
Viewed by 987
Abstract
Achieving object grasping in everyday environments by leveraging the powerful generalization capabilities of foundational general models while enhancing their deployment efficiency within robotic control systems represents a key challenge for service robots. To address the application environments and hardware resource constraints of household robots, a Three-step Pipeline Grasping Framework (TPGF) is proposed for zero-shot object grasping. The framework operates on the principle of “object perception–object point cloud extraction–grasping pose determination” and requires no training or fine-tuning. We integrate advanced foundational models into the Object Perception Module (OPM) to maximize zero-shot generalization and develop a novel Point Cloud Extraction Method (PCEM) based on Depth Information Suppression (DIS) to enable targeted grasping from complex scenes. Furthermore, to significantly reduce hardware overhead and accelerate deployment, a Saturated Truncation strategy based on relative information entropy is introduced for high-precision quantization, resulting in the highly efficient model, EntQ-EdgeSAM. Experimental results on public datasets demonstrate the superior detection generalization of the combined foundational models compared to task-specific baselines. The proposed Saturated Truncation strategy achieves 3–21% higher quantization accuracy than symmetric uniform quantization, leading to 3.5% model file compression and 95% faster inference speed for EntQ-EdgeSAM. Grasping experiments confirm that the TPGF achieves robust recognition accuracy and high grasping success rates in zero-shot object grasping tasks within replicated everyday environments, proving its practical value and efficiency for real-world robotic deployment. Full article
(This article belongs to the Section Intelligent Sensors)
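
The Saturated Truncation strategy based on relative information entropy belongs to the same family as KL-divergence calibration for activation quantization: sweep candidate clipping thresholds and keep the one whose coarsely quantized histogram stays closest to the full-precision one. A rough illustration of that family of methods, not the paper's exact procedure:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    p, q = p / (p.sum() + eps), q / (q.sum() + eps)
    return float(np.sum(np.where(p > 0, p * np.log((p + eps) / (q + eps)), 0.0)))

def calibrate_clip_threshold(activations, num_bins=2048, levels=128):
    """Sweep clipping thresholds over an |activation| histogram and return the one
    that minimizes the relative entropy to its coarsely re-binned approximation."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_kl, best_t = np.inf, edges[-1]
    for i in range(levels, num_bins + 1):
        ref = hist[:i].astype(float).copy()
        ref[-1] += hist[i:].sum()                  # fold the clipped tail into the last bin
        coarse = np.zeros(i)
        cuts = np.linspace(0, i, levels + 1).astype(int)
        for lo, hi in zip(cuts[:-1], cuts[1:]):    # quantize hist[:i] down to `levels` bins
            seg = hist[lo:hi].astype(float)
            occupied = (seg > 0).sum()
            if occupied:
                coarse[lo:hi] = np.where(seg > 0, seg.sum() / occupied, 0.0)
        kl = kl_div(ref, coarse)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t

# Example on synthetic long-tailed activations.
acts = np.random.default_rng(1).standard_normal(50_000) ** 3
print(calibrate_clip_threshold(acts))
```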

26 pages, 6312 KB  
Article
A Novel Telescopic Aerial Manipulator for Installing and Grasping the Insulator Inspection Robot on Power Lines: Design, Control, and Experiment
by Peng Yang, Hao Wang, Xiuwei Huang, Jiawei Gu, Tao Deng and Zonghui Yuan
Drones 2025, 9(11), 741; https://doi.org/10.3390/drones9110741 - 24 Oct 2025
Viewed by 970
Abstract
Insulators on power lines require regular maintenance by operators in high-altitude hazardous environments, and the emergence of aerial manipulators provides efficient and safe support for this scenario. In this study, a lightweight telescopic aerial manipulator system is developed that can install and retrieve insulator inspection robots on power lines. The aerial manipulator has three degrees of freedom, comprising two telescopic scissor mechanisms and one pitch rotation mechanism. Multiple types of cameras and sensors are configured in the structure, and its total mass is 2.2 kg. Next, the kinematic model, dynamic model, and instantaneous contact force model of the designed aerial manipulator are derived. Then, a hybrid position/force control strategy and a visual detection and estimation algorithm are designed to complete the operation tasks. Finally, an external-load lifting test, a grasp-and-installation operation test, and an outdoor flight operation test are carried out. The results not only quantitatively evaluate the effectiveness of the system's structural and control design but also verify that the aerial manipulator can accurately and automatically grasp and install a 3.6 kg target device in outdoor flight. Full article

24 pages, 6113 KB  
Article
Vision-Based Reinforcement Learning for Robotic Grasping of Moving Objects on a Conveyor
by Yin Cao, Xuemei Xu and Yazheng Zhang
Machines 2025, 13(10), 973; https://doi.org/10.3390/machines13100973 - 21 Oct 2025
Viewed by 2119
Abstract
This study introduces an autonomous framework for grasping moving objects on a conveyor belt, enabling unsupervised detection, grasping, and categorization. The work focuses on two common object shapes—cylindrical cans and rectangular cartons—transported at a constant speed of 3–7 cm/s on the conveyor, emulating typical scenarios. The proposed framework combines a vision-based neural network for object detection, a target localization algorithm, and a deep reinforcement learning model for robotic control. Specifically, a YOLO-based neural network was employed to detect the 2D position of target objects. These positions are then converted to 3D coordinates, followed by pose estimation and error correction. A Proximal Policy Optimization (PPO) algorithm was then used to provide continuous control decisions for the robotic arm. A tailored reinforcement learning environment was developed using the Gymnasium interface. Training and validation were conducted on a 7-degree-of-freedom (7-DOF) robotic arm model in the PyBullet physics simulation engine. By leveraging transfer learning and curriculum learning strategies, the robotic agent effectively learned to grasp multiple categories of moving objects. Simulation experiments and randomized trials show that the proposed method enables the 7-DOF robotic arm to consistently grasp conveyor belt objects, achieving an approximately 80% success rate at conveyor speeds of 0.03–0.07 m/s. These results demonstrate the potential of the framework for deployment in automated handling applications. Full article
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)
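
A tailored reinforcement learning environment with the Gymnasium interface comes down to a class exposing `reset` and `step` with the standard five-tuple return, which a PPO implementation can then consume. A toy skeleton under the current Gymnasium API; the observation/action spaces, belt kinematics, and reward are placeholders, not the paper's PyBullet setup:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ConveyorGraspEnv(gym.Env):
    """Toy stand-in: the agent drives a gripper point toward a target moving along x."""

    def __init__(self, belt_speed=0.05):
        super().__init__()
        self.belt_speed = belt_speed                                  # m/s, within 0.03-0.07
        self.observation_space = spaces.Box(-5.0, 5.0, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.gripper = np.zeros(3, dtype=np.float32)
        self.target = np.array([-0.5, 0.2, 0.05], dtype=np.float32)
        return np.concatenate([self.gripper, self.target]), {}

    def step(self, action):
        self.gripper += 0.01 * np.asarray(action, dtype=np.float32)  # bounded end-effector move
        self.target[0] += self.belt_speed * 0.02                     # belt advances at 50 Hz
        dist = float(np.linalg.norm(self.gripper - self.target))
        terminated = dist < 0.02                                     # "grasp" when close enough
        reward = -dist + (10.0 if terminated else 0.0)
        obs = np.concatenate([self.gripper, self.target])
        return obs, reward, terminated, False, {}
```

An off-the-shelf PPO implementation (for example Stable-Baselines3) can be trained directly on an instance of such a class; the paper's actual environment wraps the 7-DOF arm in PyBullet rather than this toy dynamics.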

21 pages, 6219 KB  
Article
Model-Free Transformer Framework for 6-DoF Pose Estimation of Textureless Tableware Objects
by Jungwoo Lee, Hyogon Kim, Ji-Wook Kwon, Sung-Jo Yun, Na-Hyun Lee, Young-Ho Choi, Goobong Chung and Jinho Suh
Sensors 2025, 25(19), 6167; https://doi.org/10.3390/s25196167 - 5 Oct 2025
Viewed by 909
Abstract
Tableware objects such as plates, bowls, and cups are usually textureless, uniform in color, and vary widely in shape, making it difficult to apply conventional pose estimation methods that rely on texture cues or object-specific CAD models. These limitations present a significant obstacle to robotic manipulation in restaurant environments, where reliable six-degree-of-freedom (6-DoF) pose estimation is essential for autonomous grasping and collection. To address this problem, we propose a model-free and texture-free 6-DoF pose estimation framework based on a transformer encoder architecture. This method uses only geometry-based features extracted from depth images, including surface vertices and rim normals, which provide strong structural priors. The pipeline begins with object detection and segmentation using a pretrained video foundation model, followed by the generation of uniformly partitioned grids from depth data. For each grid cell, centroid positions and surface normals are computed and processed by a transformer-based model that jointly predicts object rotation and translation. Experiments with ten types of tableware demonstrate that the method achieves an average rotational error of 3.53 degrees and a translational error of 13.56 mm. Real-world deployment on a mobile robot platform with a manipulator further validated its ability to autonomously recognize and collect tableware, highlighting the practicality of the proposed geometry-driven approach for service robotics. Full article
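
The per-cell geometric features that feed the transformer, a centroid and a surface normal, can be obtained from the cell's back-projected 3D points with a small singular value decomposition. A sketch of that step only; the depth-grid partitioning and the transformer itself are not shown:

```python
import numpy as np

def cell_centroid_and_normal(points):
    """Centroid and least-squares plane normal of the (N, 3) points in one grid cell.
    The normal is the right singular vector with the smallest singular value."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

# Example: points on the z = 0 plane give a normal close to (0, 0, +-1).
pts = np.c_[np.random.default_rng(2).random((100, 2)), np.zeros(100)]
print(cell_centroid_and_normal(pts)[1])
```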

17 pages, 3936 KB  
Article
Markerless Force Estimation via SuperPoint-SIFT Fusion and Finite Element Analysis: A Sensorless Solution for Deformable Object Manipulation
by Qingqing Xu, Ruoyang Lai and Junqing Yin
Biomimetics 2025, 10(9), 600; https://doi.org/10.3390/biomimetics10090600 - 8 Sep 2025
Viewed by 807
Abstract
Contact-force perception is a critical component of safe robotic grasping. With the rapid advances in embodied intelligence technology, humanoid robots have enhanced their multimodal perception capabilities. Conventional force sensors face limitations, such as complex spatial arrangements, installation challenges at multiple nodes, and potential interference with robotic flexibility. Consequently, these conventional sensors are unsuitable for biomimetic robot requirements in object perception, natural interaction, and agile movement. Therefore, this study proposes a sensorless external force detection method that integrates SuperPoint-Scale Invariant Feature Transform (SIFT) feature extraction with finite element analysis to address force perception challenges. A visual analysis method based on the SuperPoint-SIFT feature fusion algorithm was implemented to reconstruct a three-dimensional displacement field of the target object. Subsequently, the displacement field was mapped to the contact force distribution using finite element modeling. Experimental results demonstrate a mean force estimation error of 7.60% (isotropic) and 8.15% (anisotropic), with RMSE < 8%, validated by flexible pressure sensors. To enhance the model’s reliability, a dual-channel video comparison framework was developed. By analyzing the consistency of the deformation patterns and mechanical responses between the actual compression and finite element simulation video keyframes, the proposed approach provides a novel solution for real-time force perception in robotic interactions. The proposed solution is suitable for applications such as precision assembly and medical robotics, where sensorless force feedback is crucial. Full article
(This article belongs to the Special Issue Bio-Inspired Intelligent Robot)
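
The displacement field behind the force estimate starts from keypoints matched between the undeformed and deformed frames. A classical-SIFT-only OpenCV sketch; the paper fuses SuperPoint with SIFT, and the learned half as well as the finite element mapping to forces are omitted here:

```python
import cv2
import numpy as np

def sift_displacement_field(gray_before, gray_after):
    """Match SIFT keypoints between two grayscale frames and return matched
    positions plus their 2D displacement vectors (deformed minus undeformed)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray_before, None)
    kp2, des2 = sift.detectAndCompute(gray_after, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for pair in matches if len(pair) == 2
            for m, n in [pair] if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    return src, dst - src
```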

32 pages, 25342 KB  
Article
An End-to-End Computationally Lightweight Vision-Based Grasping System for Grocery Items
by Thanavin Mansakul, Gilbert Tang, Phil Webb, Jamie Rice, Daniel Oakley and James Fowler
Sensors 2025, 25(17), 5309; https://doi.org/10.3390/s25175309 - 26 Aug 2025
Viewed by 1550
Abstract
Vision-based grasping for mobile manipulators poses significant challenges in machine perception, computational efficiency, and real-world deployment. This study presents a computationally lightweight, end-to-end grasp detection framework that integrates object detection, object pose estimation, and grasp point prediction for a mobile manipulator equipped with a parallel gripper. A transformation model is developed to map coordinates from the image frame to the robot frame, enabling accurate manipulation. To evaluate system performance, a benchmark and a dataset tailored to pick-and-pack grocery tasks are introduced. Experimental validation demonstrates an average execution time of under 5 s on an edge device, achieving a 100% success rate on Level 1 and 96% on Level 2 of the benchmark. Additionally, the system achieves an average compute-to-speed ratio of 0.0130, highlighting its energy efficiency. The proposed framework offers a practical, robust, and efficient solution for lightweight robotic applications in real-world environments. Full article
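
The transformation model that maps coordinates from the image frame to the robot frame can be written as a pinhole back-projection followed by a hand-eye homogeneous transform. A generic sketch; the intrinsics `K` and extrinsics `T_robot_cam` are assumed to come from calibration, and none of the names are the framework's own:

```python
import numpy as np

def pixel_to_robot(u, v, depth_m, K, T_robot_cam):
    """Back-project pixel (u, v) with metric depth into the camera frame, then
    map it into the robot base frame via a 4x4 homogeneous transform."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth_m / fx, (v - cy) * depth_m / fy, depth_m, 1.0])
    return (T_robot_cam @ p_cam)[:3]

# Example with an identity hand-eye transform.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
print(pixel_to_robot(400, 300, 0.5, K, np.eye(4)))
```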
