Search Results (20,557)

Search Parameters:
Keywords = robotization

39 pages, 2599 KB  
Article
A Cable-Driven Hybrid Robot with Series-Parallel Coupling: Design, Modeling, Optimization Analysis, and Trajectory Tracking
by Zhifu Xue, Zhiquan Yang, Junyi Hu, Bin Zhu and Jianqing Peng
Sensors 2026, 26(4), 1147; https://doi.org/10.3390/s26041147 - 10 Feb 2026
Abstract
Compared to purely serial robots or cable-driven parallel robots (CDPRs), cable-driven hybrid robots (CDHRs) combine the advantages of both, addressing their limitations and enabling the execution of complex tasks. The series-parallel coupling structure increases the complexity of the system, complicating modeling, calibration, and force-closure workspace (FCW) analysis. This study develops a CDHR system equipped with various sensors and proposes methods for series-parallel coupling modeling, workspace analysis, and self-calibration of complex systems. First, the modular design requirements for the CDHR are analyzed, comprising an 8-cable parallel drive and a 4-degree-of-freedom serial manipulator. Second, a kinematic model of the CDHR with series-parallel coupling was derived, and the positions of the dynamic anchor seats were optimized using an optimization algorithm. Based on these optimized results, a modeling and analysis method for the statics and FCW is proposed. Furthermore, considering the complex and interdependent structural parameters of the system, a method for the self-calibration of the system parameters and trajectory planning for the CDHR is presented. Finally, experimental validation on both simulations and a physical prototype confirmed the effectiveness of the proposed methods. The developed prototype and the proposed method provide a basis for high-precision operations in large spaces, operations in dangerous/extreme environments, and automated operations in logistics/warehousing. Full article
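For the parallel stage of a cable-driven hybrid robot like the one described, the cable-length inverse kinematics reduces to point-to-point distances between frame anchors and platform attachment points. A minimal sketch under a hypothetical 8-cable geometry (illustrative dimensions, translation only, ignoring platform rotation — not the paper's optimized anchor positions):

```python
import math

def cable_lengths(platform_pos, anchors, attachments):
    """Inverse kinematics of a cable-driven parallel stage: each cable
    length is the distance from its frame anchor to the corresponding
    platform attachment point (platform translation only, for brevity)."""
    px, py, pz = platform_pos
    lengths = []
    for (ax, ay, az), (bx, by, bz) in zip(anchors, attachments):
        # world position of this attachment point for the given pose
        wx, wy, wz = px + bx, py + by, pz + bz
        lengths.append(math.dist((ax, ay, az), (wx, wy, wz)))
    return lengths

# Hypothetical 8-cable geometry: four anchors on the top frame corners,
# four on the bottom, attachments on a small square platform.
anchors = [(sx, sy, 2.0) for sx in (-1, 1) for sy in (-1, 1)] + \
          [(sx, sy, 0.0) for sx in (-1, 1) for sy in (-1, 1)]
attach = [(0.05 * sx, 0.05 * sy, 0.0) for sx in (-1, 1) for sy in (-1, 1)] * 2

L = cable_lengths((0.0, 0.0, 1.0), anchors, attach)
print([round(v, 3) for v in L])
```

At the centered pose all eight lengths coincide by symmetry; moving the platform off-center stretches some cables and slackens others, which is what the force-closure workspace analysis must account for.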
25 pages, 12559 KB  
Article
Design and Implementation of a Low-Cost Perception System for Aerial Robots in Confined Spaces
by Susan Basnet, Jens Christian Andersen and Evangelos Boukas
Sensors 2026, 26(4), 1140; https://doi.org/10.3390/s26041140 - 10 Feb 2026
Abstract
Operating an aerial vehicle in a confined space, such as a vessel ballast tank, is a major challenge in terms of localization, perception, and control due to limited visibility, constrained maneuvering space, and the absence of reliable (if any) GNSS signals. This paper addresses the design considerations for a quadcopter in confined spaces, focusing on a novel perception system using 12 VL53L8CX time-of-flight (ToF) sensors from STMicroelectronics. These sensors are used for enhanced perception and collision avoidance while flying in confined spaces, making them a suitable alternative to bulky LiDAR systems, reducing weight, cost, and required computational power. These sensors are placed strategically around the quadcopter to cover 360° radial view within a 4 m range. Experiments are conducted to test the reliability and repeatability of the integrated system, along with its synchronization feature. Furthermore, the applicability is verified by flying in confined and cluttered spaces, both in simulation and the real world. This design and study aims to establish a baseline for lightweight, compact, and safe navigation for small drones in confined and featureless environments. Full article
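A quick way to see why a small ring of such sensors can stand in for a scanning LiDAR radially: with the VL53L8CX's roughly 45° horizontal field of view, the per-sensor yaw spacing determines whether the ring has gaps or overlap. A toy check (the 45° figure and even single-ring placement are assumptions; the actual 12-sensor layout may also cover tilted rings):

```python
def ring_coverage(n_sensors, fov_deg):
    """Even yaw spacing of n sensors around a ring; returns the spacing
    and the per-sensor overlap margin (negative means a blind gap)."""
    spacing = 360.0 / n_sensors
    return spacing, fov_deg - spacing

# Assuming a ~45-degree horizontal field of view per VL53L8CX unit:
for n in (6, 8, 12):
    spacing, overlap = ring_coverage(n, 45.0)
    print(f"{n:2d} sensors: {spacing:5.1f} deg apart, overlap {overlap:+.1f} deg")
```

Eight units already close the ring at 45°; going to twelve buys 15° of overlap per boundary, useful margin against mounting tolerances and the sensors' weaker response at the FoV edges.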
(This article belongs to the Section Sensors and Robotics)
27 pages, 4306 KB  
Review
Endoscopic and Hybrid Approaches for Gastric Subepithelial Tumors: Expanding the Frontiers of Minimally Invasive Therapy
by Francesco Bombaci, Angelo Bruni, Michele Dota, Massimo Del Gaudio, Giuseppe Dell’Anna, Francesco Vito Mandarino, Francesco Azzolini, Emanuele Sinagra, Lorenzo Fuccio, Rocco Maurizio Zagari, Giovanni Barbara and Paolo Cecinato
Gastroenterol. Insights 2026, 17(1), 13; https://doi.org/10.3390/gastroent17010013 - 10 Feb 2026
Abstract
Per-oral flexible endoscopy has expanded minimally invasive options for the management of gastric subepithelial tumors (G-SETs). This narrative review appraises conventional and advanced endoscopic resections alongside hybrid laparoscopic–endoscopic procedures, within a size- and layer-based clinical framework. Endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD) achieve high en bloc resection rates for small, intraluminal tumors arising from mucosa or submucosa. Traction strategies and dedicated traction devices may improve submucosal exposure, shorten procedure time, and reduce adverse events. Submucosal tunneling endoscopic resection (STER) has been developed to enucleate tumors originating from the muscularis propria while preserving mucosal integrity. However, tunnel creation and specimen retrieval become challenging for large tumors or for those located in the cardia or fundus. Endoscopic full-thickness resection (EFTR) enables controlled transmural excision of G-SETs arising from deeper wall layers. Exposed EFTR, combined with secure endoscopic closure, provides high en bloc and complete (R0) resection rates. Closure options range from through-the-scope clips—for small defects—to over-the-scope clips, endoloop-clip purse-string methods, reopenable-clip over-the-line techniques and endoscopic suturing systems—for larger defects. Non-exposed EFTR and device-assisted systems reduce the risk of peritoneal contamination, although complete resection rates are more variable. Hybrid approaches, including classical laparoscopic–endoscopic cooperative surgery (LECS) and non-exposure variants, combine endoscopic precision with the safety and closure capabilities of laparoscopic surgery, minimizing the amount of resected gastric wall. They are particularly suited to larger, awkwardly located or ulcerated G-SETs. 
Emerging traction platforms, flexible robotic systems, and AI-based tools may further broaden the role of per-oral flexible endoscopy for the treatment of G-SETs. However, evidence remains preliminary, and surgery continues to play a key role for large, extraluminal or anatomically prohibitive G-SETs. Full article
(This article belongs to the Collection Advances in Gastrointestinal Cancer)
49 pages, 1517 KB  
Article
Enhanced Rotating Machinery Fault Diagnosis Using Hybrid RBSO–MRFO Adaptive Transformer-LSTM for Binary and Multi-Class Classification
by Amir R. Ali and Hossam Kamal
Machines 2026, 14(2), 208; https://doi.org/10.3390/machines14020208 - 10 Feb 2026
Abstract
Accurate fault diagnosis in rotating machinery is critical for predictive maintenance and operational reliability in industrial applications. Despite the effectiveness of deep learning, many models underperform due to manually selected hyperparameters, which can lead to premature convergence, overfitting, weak generalization, and inconsistent performance across binary and multi-class classification. To address these limitations, the study proposes a novel hybrid hyperparameter optimization framework that combines Robotic Brain Storm Optimization (RBSO) with Manta Ray Foraging Optimization (MRFO) to optimally fine-tune deep learning architectures, including MLP, LSTM, GRU-TCN, CNN-BiLSTM, and Transformer-LSTM models. The framework leverages RBSO for global search to promote diversity and prevent premature convergence, and MRFO for local search to enhance convergence toward optimal solutions, with their combined effect improving predictive model performance and methodological generalization. The approach was validated on three benchmark datasets, including Case Western Reserve University (CWRU), industrial machine fault detection (TMFD), and the Machinery Fault Dataset (MaFaulDa). Before optimization, the Transformer-LSTM model achieved 98.35% and 97.21% accuracy on CWRU binary and multi-class classification, 99.52% and 98.57% on TMFD, and 98.18% and 92.82% on MaFaulDa. Following hybrid optimization, the Transformer-LSTM exhibited superior performance, with accuracies increasing to 99.72% for both CWRU tasks, 99.97% for TMFD, and 99.98% and 98.60% for MaFaulDa, substantially reducing misclassification. These results demonstrate that the proposed RBSO–MRFO framework provides a scalable, robust, and high-accuracy solution for intelligent fault diagnosis in rotating machinery. Full article
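The global-then-local division of labor described above can be illustrated generically: a diversity-preserving exploration phase hands its incumbent to a perturbation-based refinement phase. This sketch is a generic stand-in for the RBSO (global) plus MRFO (local) pairing, not the paper's algorithms:

```python
import random

def hybrid_search(f, bounds, n_global=300, n_local=200, step=0.1, seed=0):
    """Two-phase hybrid: random global sampling preserves diversity,
    then Gaussian perturbation refines the incumbent locally."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    fbest = f(best)
    for _ in range(n_global):            # exploration phase (RBSO's role)
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    for _ in range(n_local):             # exploitation phase (MRFO's role)
        x = [min(max(b + rng.gauss(0.0, step), lo), hi)
             for b, (lo, hi) in zip(best, bounds)]
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy objective standing in for a hyperparameter-validation loss.
best, fbest = hybrid_search(lambda v: sum(c * c for c in v), [(-5.0, 5.0)] * 2)
print(round(fbest, 6))
```

In the paper's setting, each candidate vector would encode hyperparameters (learning rate, layer sizes, and so on) and `f` would be a validation-loss evaluation, which makes the budget split between the two phases the key tuning knob.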
(This article belongs to the Section Machines Testing and Maintenance)
18 pages, 1017 KB  
Article
Robotic Total Knee Replacement: Single-Centre, Prospective, Non-Randomised Comparative Study Comparing Restricted Kinematic Alignment Combined with a Load Sensor Versus Functional Alignment
by César Tourtoulou, Julien Bardou-Jacquet, François Blaquière, Nicolas Pommier, Pierre Laumonerie, Jérôme Murgier and Yohan Legallois
J. Clin. Med. 2026, 15(4), 1396; https://doi.org/10.3390/jcm15041396 - 10 Feb 2026
Abstract
Background: Total knee arthroplasty (TKA) is an effective procedure for symptomatic end-stage knee arthritis with good clinical and survivorship outcomes. However, up to 20% of patients report dissatisfaction following TKA. Recent studies have suggested that this may be at least partially due to suboptimal limb alignment or ligament imbalance. This study compared clinical outcomes at 1 year post-operatively (i.e., the 2011 Knee Society Score [KSS] and Forgotten Joint Score [FJS]) between two robotic-assisted personalised TKA techniques: functional alignment (FA) and an original technique combining restricted kinematic alignment (rKA) with a load sensor to achieve reliable ligament balancing (via bone re-cutting with a robotic arm). Methods: This single-centre, prospective, comparative study was performed at a robotic-assisted arthroplasty centre. The study population consisted of an FA group (43 patients) and rKA/sensor group (47 patients). Clinical outcomes were measured at 1 month post-operatively (visual analogue scale [VAS] pain score, flexion, range of motion [ROM], use of a mobility aid and stiffness) and at 1 year (2011 KSS, FJS, VAS, flexion and ROM). Results: There were no statistically significant differences in 2011 KSS or FJS at 1 year post-operatively between the two groups. Multivariate analysis showed no independent association of either technique with the 1-year follow-up KSS Objective Knee Indicators score (adjusted beta coefficient (aβ) = −2.371 [−7.380; 2.638], p = 0.357), KSS Patient Satisfaction score (aβ = −2.522 [−6.887; 1.842], p = 0.262), KSS Patient Expectations score (aβ = 0.629 [−0.928; 2.186], p = 0.431), KSS Functional Activities score (aβ = −3.399 [−10.881; 4.082], p = 0.377) or 1-year follow-up FJS (aβ = −5.168 [−19.887; 9.550], p = 0.494). Conclusions: There were no significant differences between the FA and rKA/load sensor groups in the 2011 KSS or FJS at 1 year post-operatively. 
To our knowledge, this is the first study to compare clinical outcomes between robotic-assisted FA TKA and rKA TKA. Clinical outcomes in the rKA/sensor group were similar to previous studies using rKA without robotic assistance or a load sensor. This was also the first report of the clinical outcomes of FA. The results need to be validated by larger scale studies to avoid potential type 2 errors. Full article
(This article belongs to the Section Orthopedics)
18 pages, 3352 KB  
Article
A Multi-Supervised Network for Real-Time and Accurate Semantic Segmentation in Underwater Scenes
by Yue Liu, Jun Ding, Mingze Xu, Zhigang Huang and Yiming Qiang
J. Mar. Sci. Eng. 2026, 14(4), 340; https://doi.org/10.3390/jmse14040340 - 10 Feb 2026
Abstract
Real-time semantic segmentation is a core perception capability for underwater robots and autonomous underwater vehicles (AUVs), yet it remains challenging because underwater imagery often exhibits low contrast, blurred boundaries, and strong appearance degradation under strict onboard computation budgets. This paper proposes MSNet, a multi-supervised two-pathway network that decouples feature learning into a semantic branch for context modeling and a detail branch for preserving high-resolution spatial information. MSNet introduces three complementary supervisory signals: (i) low-frequency semantic supervision derived from smoothed labels to encourage body semantics, (ii) high-frequency detail supervision derived from edge-enhanced labels to improve boundary localization, and (iii) category representation supervision implemented by a Category Representation Enhancement Module (CREM) to strengthen class discrimination at the deepest stage. To prevent auxiliary supervision from amplifying cross-resolution misalignment during fusion, we embed a Bilateral Flow-based Alignment Module (BFAM) into multi-stage feature fusion. Experiments on the SUIM benchmark show that MSNet achieves 79.83% mIoU and 86.57% F-score at 55 FPS with 6.2 M parameters on an RTX 3060 GPU, outperforming mainstream encoder–decoder and two-pathway algorithms. Compared with SFNet and BiSeNet V3, MSNet improves mIoU by 1.52% and 1.89%, and runs 9 FPS faster than SFNet. Ablation studies verify the effectiveness and complementarity of the proposed supervision and alignment strategies, indicating MSNet offers a practical accuracy–speed trade-off for marine engineering applications. Full article
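For reference, the mIoU metric reported above is computed per class as TP / (TP + FP + FN) from a confusion matrix and then averaged over classes. A minimal sketch (toy 2-class pixel counts, not SUIM data):

```python
def mean_iou(conf):
    """Mean intersection-over-union from a confusion matrix where
    conf[t][p] counts pixels of true class t predicted as class p."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[t][c] for t in range(n)) - tp   # predicted c, wrongly
        fn = sum(conf[c]) - tp                        # true c, missed
        denom = tp + fp + fn
        if denom:                                     # skip absent classes
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Toy example: 80 background pixels (70 correct), 20 object pixels (15 correct).
conf = [[70, 10],
        [5, 15]]
print(round(mean_iou(conf), 4))
```

Because IoU penalizes both false positives and false negatives, it is stricter than pixel accuracy on small or thin classes, which is why boundary-focused supervision like MSNet's high-frequency detail branch tends to move it.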
(This article belongs to the Section Ocean Engineering)
17 pages, 3074 KB  
Article
Dual-Modal Vision–Sonar Object Detection for Underwater Robots Based on Deep Learning
by Xiaoming Wang, Zhenyu Wang and Dexue Bi
J. Mar. Sci. Eng. 2026, 14(4), 338; https://doi.org/10.3390/jmse14040338 - 10 Feb 2026
Abstract
Applying state-of-the-art RGB object detectors (e.g., YOLOv8) to underwater scenes often yields unstable performance due to scattering, absorption, illumination deficiency, and bandwidth-limited transmission that severely corrupt image contrast and details. Forward-looking sonar (FLS) remains informative in turbid or low-visibility water, yet its low resolution and weak semantics make conventional fusion architectures costly and difficult to deploy on resource-constrained robots. This paper proposes a paired-sample-free RGB–FLS joint training paradigm based on parameter sharing, where RGB and FLS images from different datasets are jointly used during training without any frame-level pairing or architectural modification. The resulting model preserves the original detector parameter scale and inference cost, and requires only RGB input at test time. Experiments on the SeaClear and Marine Debris FLS datasets under six representative underwater degradation factors (contrast loss, blur, resolution reduction, color cast, and JPEG compression) show consistent robustness gains over RGB-only training. In particular, under severe low-contrast corruption, the proposed training strategy improves mAP50 by more than 14 percentage points compared with the RGB-only baseline. These results indicate that sonar-domain supervision functions as an auxiliary structural constraint during optimization, rather than a conventional multi-source data enlargement. By forcing a shared-parameter detector to fit a texture-poor, geometry-dominant sonar domain, the learned representation is biased away from color/texture shortcuts and becomes more stable under adverse underwater degradations, without increasing deployment complexity. Full article
(This article belongs to the Special Issue Advances in Marine Autonomous Vehicles)
36 pages, 3799 KB  
Article
Study and Implementation of State Observers for Flexible Industrial Manipulators Including Friction
by Matilde Zampolini, Marina Indri and Aldo Maria Bottero
Appl. Sci. 2026, 16(4), 1743; https://doi.org/10.3390/app16041743 - 10 Feb 2026
Abstract
State observers for reconstructing the angular velocity of the links of a flexible industrial manipulator are experimentally investigated in this paper, in the presence of unmodeled or uncertain dynamics. Considering only one axis moving at a time, a study is conducted to understand how faithfully the dynamics of the machine can be reconstructed using simple single-axis models, extending them to take into account the multi-variable dynamics of the system and trying to reconstruct the action of non-linear friction as well. The goal is to show how a good estimate of the interactions between the links can be obtained, with the final aim of including it into a control architecture. Various models of different complexities have been tested with both the asymptotic Luenberger observer and the steady-state Kalman filter. The presence of friction is taken into account by a feedforward compensation or by the addition of a disturbance observer synthesized as a pole placement regulator. First, the observers are tested in simulation, then on real data from a Comau Racer 7-1.0 robot. To evaluate the quality of the reconstruction, a virtual sensor obtained from the identification of the manipulator is used, and then a final test is carried out using a real Xsens gyroscope. An accurate analysis of the achieved results is provided, devoting particular attention to the trade-off between model complexity, estimate accuracy and computational burden in view of a possible future insertion into the control architecture of an industrial robot. Full article
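The asymptotic Luenberger observer mentioned above reconstructs unmeasured states by correcting a model prediction with the measurement innovation. A minimal discrete-time sketch on a hypothetical 2-state single-axis model (illustrative matrices and hand-tuned gains, not the paper's identified Comau model):

```python
def simulate_observer(A, B, C, L, u_seq, x0, xhat0):
    """Run plant and Luenberger observer side by side:
    xhat_{k+1} = A xhat_k + B u_k + L (y_k - C xhat_k).
    Returns the max-abs estimation error at each step."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    x, xhat, errs = list(x0), list(xhat0), []
    for u in u_seq:
        innov = sum(c * xi for c, xi in zip(C, x)) - \
                sum(c * xi for c, xi in zip(C, xhat))
        x = [a + b * u for a, b in zip(matvec(A, x), B)]
        xhat = [a + b * u + l * innov
                for a, b, l in zip(matvec(A, xhat), B, L)]
        errs.append(max(abs(xi - xh) for xi, xh in zip(x, xhat)))
    return errs

# Stable 2-state plant (position, velocity); only position is measured,
# the observer reconstructs velocity.
A = [[1.0, 0.01], [0.0, 0.99]]
B = [0.0, 0.01]
C = [1.0, 0.0]
L = [0.5, 2.0]   # places the error dynamics' eigenvalues inside the unit circle
errs = simulate_observer(A, B, C, L, [1.0] * 200, [0.2, 0.0], [0.0, 0.0])
print(errs[0], errs[-1])
```

The error obeys e_{k+1} = (A − LC) e_k regardless of the input, so choosing L to make A − LC stable drives the estimate to the true state; the steady-state Kalman filter chooses the same structure's gain from noise covariances instead of pole placement.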
(This article belongs to the Special Issue Feature Papers in Robotics and Automation)
46 pages, 4553 KB  
Review
A Review of Computational Modeling of Polymer Composites and Nanocomposites
by Zhangke Yang and Zhaoxu Meng
Polymers 2026, 18(4), 443; https://doi.org/10.3390/polym18040443 - 10 Feb 2026
Abstract
Polymer composites and nanocomposites have become indispensable in aerospace, automotive, energy, electronics, soft robotics, and biomedical applications due to their high specific stiffness, strength, and manufacturability with highly tailorable multifunctional performance. Their rational design is complicated by strong, multiscale couplings among microstructural heterogeneity, interfacial physics, anisotropic response, and time- and temperature-dependent behavior, spanning molecular to structural length scales. This review provides a comprehensive survey of the principal computational methodologies used to predict and interpret the mechanical behavior of polymer composites and nanocomposites, highlighting the capabilities, specialties, and complementary roles of different modeling tools. This review first summarizes the essential physical characteristics governing polymer composites and nanocomposites. We then examine computational modeling approaches for polymer composites across four length scales: the constituent scale, microscale, mesoscale, and macroscale. For each scale, the primary modeling objectives, characteristic capabilities, and domains of applicability are discussed in the context of the existing literature. Cross-scale relationships and bridging strategies among these scales are also discussed, emphasizing how lower-scale simulations inform higher-scale models. The review then focuses on computational modeling of polymer nanocomposites, with particular attention to atomistic and coarse-grained molecular dynamics methods. Representative atomistic simulations, which capture interfacial structure, reinforcement–matrix interactions, and nanoscale mechanisms, are discussed. This is followed by discussions on coarse-grained approaches that extend the accessible length and time scales. 
Finally, we discuss how atomistic and coarse-grained models complement each other within integrated multiscale frameworks, enabling predictive links between nanoscale physics and macroscopic mechanical behaviors. Full article
(This article belongs to the Special Issue Computational Modeling of Polymer Composites and Nanocomposites)
24 pages, 28367 KB  
Article
Hybrid Offline–Online Configuration Planning Approach for Continuum Robots Based on Real-Time Shape Estimation
by Hexiang Yuan, Zhibo Jing, Yibo He, Jianda Han and Juanjuan Zhang
Sensors 2026, 26(4), 1129; https://doi.org/10.3390/s26041129 - 10 Feb 2026
Abstract
Continuum robots possess highly flexible backbones, enabling remarkable adaptability and dexterity for motion in confined environments. However, this flexibility also introduces significant nonlinearities and uncertainties, making motion planning under physical constraints particularly challenging. To address this, a hybrid offline–online configuration planning framework is proposed in this work. Specifically, the configuration planning problem is formulated as a nonlinear optimization task that considers collision avoidance and structural constraints. A co-evolutionary strategy is incorporated into the differential evolution (DE) algorithm to decompose the target high-dimensional optimization problem. Then, an unscented Kalman filter (UKF)-based strategy is presented for real-time shape estimation using tip pose feedback for safe distance monitoring. Based on this shape feedback, an online configuration refiner is designed to locally adjust the preplanned configurations, thus leveraging the global perspective of the offline planning configuration to steer the continuum manipulator through constrained spaces. Validation and comparative experiments demonstrate the effectiveness of the proposed method, as well as its enhanced motion smoothness and safe motion performance in real-world environments. Full article
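The differential evolution backbone into which the co-evolutionary strategy is incorporated follows the classic DE/rand/1/bin pattern: mutate with a scaled difference of two random members, cross over binomially, select greedily. A plain-vanilla sketch on a toy objective (without the co-evolutionary decomposition, which is the paper's contribution):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=1):
    """Classic DE/rand/1/bin minimizer over box-constrained variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(p) for p in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)        # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy objective standing in for the configuration-planning cost.
sphere = lambda v: sum(c * c for c in v)
x, fx = differential_evolution(sphere, [(-5.0, 5.0)] * 4)
print(fx)
```

In the configuration-planning setting, each decision vector would encode section curvatures and the cost would combine tracking error with collision and structural-constraint penalties; the co-evolutionary step then splits that high-dimensional vector into sub-populations evolved cooperatively.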
(This article belongs to the Special Issue Applied Robotics in Mechatronics and Automation)
23 pages, 4117 KB  
Perspective
Haptic and Palpation Sensing for Robotic Surgery: Engineering Perspectives on Design and Integration
by Michael H. Friebe
Sensors 2026, 26(4), 1126; https://doi.org/10.3390/s26041126 - 10 Feb 2026
Abstract
Robotic-assisted surgery (RAS) provides enhanced dexterity and visualisation but remains constrained by the absence of clinically meaningful palpation and haptic feedback. This perspective examines palpation sensing in RAS from an engineering and system-integration standpoint, identifying the lack of tactile information as a major contributor to increased cognitive load, prolonged training, and risk of tissue injury. Recent advances in force, tactile, vibroacoustic, audio, and optical sensor technologies enable quantitative assessment of tissue mechanical properties and often exceed human tactile sensitivity. However, clinical translation is limited by challenges in sensor miniaturisation, sterilisation, robustness and integration and the absence of standardised evaluation metrics. The integration of artificial intelligence and multimodal sensor fusion with intra-operative imaging and augmented visualisation is highlighted as a key strategy to compensate for sensor limitations and biological variability. Dedicated robotic palpation devices and wireless or magnetically coupled probes are discussed as promising transitional solutions. Overall, the restoration of palpation sensing is presented as a prerequisite for improving safety and efficiency and enabling higher levels of autonomy in future RAS platforms. Full article
(This article belongs to the Special Issue Intelligent Optical Sensors in Biomedicine and Robotics)
40 pages, 21213 KB  
Article
Intuitive, Low-Cost Cobot Control System for Novice Operators, Using Visual Markers and a Portable Localisation Scanner
by Peter George, Chi-Tsun Cheng and Toh Yen Pang
Machines 2026, 14(2), 201; https://doi.org/10.3390/machines14020201 - 9 Feb 2026
Abstract
Collaborative robots (cobots) can work cooperatively alongside humans, while contributing to task automation in industries such as manufacturing. Designed with enhanced safety features, cobots can safely assist a range of users, including those with no previous robotics experience. Despite the human-centric design of cobots, programming them can be challenging for novice operators, who may lack the skills and understanding of robotics. If left with a choice between major worker upskilling or replacement and investing in expensive and complex precision cobot positioning and object-detection systems, business owners may be reluctant to embrace cobot ownership. Furthermore, if a cobot’s primary intended tasks were simple Pick-and-Place operations, the tenuous return on investment, compared to retaining current manual processes, could make cobot adoption financially impracticable. This paper proposes a low-cost cobot control system (LCCS), an intuitive cobot solution for Pick-and-Place tasks, designed for novice cobot operators. Off-the-shelf vision-based positioning solutions, priced at around $US20,000, are typically designed to be assigned to a single cobot. The LCCS comprises a Raspberry Pi, a standard USB webcam and ArUco fiducial markers, which can easily be incorporated into a multi-cobot operation, with a combined total hardware cost of around $US100. The system scales simply and economically to support an expanding operation, and it is easy to use. It allows a user to specify a target pick location by positioning a portable localisation scanner upon an object to be grasped by the cobot end-effector. The scanner’s integrated webcam captures the location and orientation perspective from ArUco markers affixed to predefined positions outside the cobot workspace. By pressing a switch mounted on the scanner, the user relays the captured information, converted to 3D coordinates, to the cobot controller. 
Finally, the cobot’s integrated processor calculates the corresponding pose using inverse kinematics, which allows the cobot to move to the target position. Subsequent actions can be pre-programmed as required, as part of the initial system configuration. Preliminary testing indicates that the proposed system provides accurate and repeatable localisation information, with a mean positional error below 3.5 mm and a mean standard deviation less than 1.8. With a hardware investment just 0.3% of the UR5e purchase price, an easy to use, customisable, and easily scalable vision-based Pick-and-Place localisation system for cobots can be implemented. It has the potential to be a reliable and robust system that significantly lowers cobot operation barriers for novice operators by alleviating the programming requirement. By reducing the reliance on experienced programmers in a production environment, cobot tasks could be deployed more rapidly and with greater flexibility. Full article
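The core coordinate step — mapping a pick location observed by the portable scanner into the cobot base frame via a fiducial marker of known pose — is a composition of rigid transforms. A 2D sketch with hypothetical poses (the real system works from full 3D ArUco pose estimates):

```python
import math

def compose(t, p):
    """Apply a 2D rigid transform t = (x, y, theta) to point p = (px, py)."""
    x, y, th = t
    px, py = p
    c, s = math.cos(th), math.sin(th)
    return (x + c * px - s * py, y + s * px + c * py)

def invert(t):
    """Inverse of a 2D rigid transform."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-(c * x + s * y), s * x - c * y, -th)

def scanner_point_to_base(p_scanner, marker_in_base, marker_in_scanner):
    """Map a point seen in the scanner camera frame into the cobot base
    frame, via a marker whose pose is known in the base frame and
    estimated (from the camera image) in the scanner frame."""
    p_marker = compose(invert(marker_in_scanner), p_scanner)
    return compose(marker_in_base, p_marker)

# Hypothetical poses: marker 1 m ahead / 0.5 m left of the cobot base,
# seen 0.2 m in front of the scanner with a 90-degree relative rotation.
p = scanner_point_to_base((0.2, 0.0), (1.0, 0.5, 0.0), (0.2, 0.0, math.pi / 2))
print(p)
```

Because the markers sit at surveyed positions outside the workspace, the scanner never needs a calibrated relationship to the cobot itself; every reading is re-anchored through the marker each time the switch is pressed.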
(This article belongs to the Special Issue Artificial Intelligence and Robotics in Manufacturing and Automation)
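The pipeline the abstract sketches — a webcam observing fixed ArUco markers, with the captured pose converted to 3D coordinates in the cobot base frame — reduces to chaining homogeneous transforms. A minimal numpy sketch of that chaining, where the function names and frame conventions are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def pose_to_matrix(rvec, tvec):
    """Build a 4x4 homogeneous transform from a Rodrigues rotation
    vector and a translation (the form OpenCV-style ArUco pose
    estimation typically returns)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = np.asarray(rvec, float) / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' rotation formula
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(tvec, float).ravel()
    return T

def target_in_base(T_base_marker, T_camera_marker, T_camera_target):
    """Express the scanner's target pose in the cobot base frame by
    chaining base<-marker, marker<-camera, camera<-target."""
    return T_base_marker @ np.linalg.inv(T_camera_marker) @ T_camera_target
```

In practice the rotation/translation pair for each marker would come from the webcam's marker detection, while `T_base_marker` would come from a one-off calibration of the fixed markers against the cobot base — both steps outside the scope of this sketch.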
11 pages, 863 KB  
Review
Transaxillary Robotic Thyroidectomy: A Novel Technique and Update
by Barbara Mullineris, Alice Francescato, Giovanni Colli, Davide Gozzo, Silvia Traficante and Micaela Piccoli
J. Clin. Med. 2026, 15(4), 1372; https://doi.org/10.3390/jcm15041372 - 9 Feb 2026
Abstract
Gasless Transaxillary Robotic Thyroidectomy (G-TART) has undergone significant refinement through the adoption of novel strategies to enhance surgical precision and safety. In this paper, we describe a novel technique that integrates dynamic endoscope repositioning, called the “swing technique”, with the use of a specialized intraoperative neuromonitoring (IONM) probe—Modena Robotic Probe—designed for robotic applications. The procedure, performed using the Da Vinci Xi system (Intuitive Surgical, Sunnyvale, CA, USA), incorporates intermittent IONM during recurrent laryngeal nerve (RLN) dissection. The swing technique involves real-time adjustment of the 30° endoscope between robotic ports to improve visualization within the confined transaxillary (TA) surgical field, particularly during contralateral dissection. Simultaneously, the Modena Robotic Probe, a custom monopolar stimulation probe developed in collaboration with Dr. Langer Medical GmbH for connection to the AVALANCHE® SI2 neuromonitor, allows precise RLN mapping and verification throughout the operation. This approach could facilitate accurate anatomical tracking, minimize the risk of thermal or mechanical nerve injury, and enable safe navigation in a narrow operative TA tunnel. The adoption of advanced imaging techniques in conjunction with specialized robotic instrumentation may contribute to enhanced surgical safety and accuracy, emphasizing the importance of procedure-specific robotic approaches in thyroid surgery. Full article
(This article belongs to the Special Issue Endocrine Surgery: Current Developments and Trends)
27 pages, 6570 KB  
Article
LiDAR–Inertial–Visual Odometry Based on Elastic Registration and Dynamic Feature Removal
by Qiang Ma, Fuhong Qin, Peng Xiao, Meng Wei, Sihong Chen, Wenbo Xu, Xingrui Yue, Ruicheng Xu and Zheng He
Electronics 2026, 15(4), 741; https://doi.org/10.3390/electronics15040741 - 9 Feb 2026
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental capability for autonomous robots. However, in highly dynamic scenes, conventional SLAM systems often suffer from degraded accuracy due to LiDAR motion distortion and interference from moving objects. To address these challenges, this paper proposes a LiDAR–Inertial–Visual odometry framework based on elastic registration and dynamic feature removal, with the aim of enhancing system robustness through detailed algorithmic supplements. In the LiDAR odometry module, an elastic registration-based de-skewing method is introduced by modeling second-order motion, enabling accurate point cloud correction under non-uniform motion. In the visual odometry module, a multi-strategy dynamic feature suppression mechanism is developed, combining IMU-assisted motion consistency verification with a lightweight YOLOv5-based detection network to effectively filter out dynamic interference with low computational overhead. Furthermore, depth information for visual key points is recovered using LiDAR assistance to enable tightly coupled pose estimation. Extensive experiments on the TUM and M2DGR datasets demonstrate that the proposed method achieves a 96.3% reduction in absolute trajectory error (ATE) compared with ORB-SLAM2 in highly dynamic scenarios. Real-world deployment on an embedded computing device further confirms the framework’s real-time performance and practical applicability in complex environments. Full article
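The IMU-assisted motion-consistency verification described in the abstract can be illustrated as a reprojection test: propagate each keypoint with the IMU-predicted camera motion and flag features whose observed image motion disagrees with the prediction. This is a hypothetical sketch under simplifying assumptions (known per-feature depth, e.g. recovered from LiDAR; names and threshold are not from the paper):

```python
import numpy as np

def consistency_mask(pts_prev, pts_curr, R_imu, t_imu, K, depths, thresh_px=3.0):
    """Return a boolean mask: True where a feature's observed motion is
    consistent with IMU-predicted camera motion (treated as static).
    pts_prev, pts_curr: (N,2) pixel coordinates in two frames;
    depths: (N,) depths of pts_prev; R_imu, t_imu: predicted relative
    camera motion; K: 3x3 camera intrinsics."""
    # back-project previous keypoints into 3D using their depths
    ones = np.ones((len(pts_prev), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T
    P = rays * depths[:, None]
    # predict where a *static* point should appear in the current frame
    P_pred = (R_imu @ P.T).T + t_imu
    p_pred = (K @ P_pred.T).T
    p_pred = p_pred[:, :2] / p_pred[:, 2:3]
    # residual between IMU prediction and observation
    err = np.linalg.norm(p_pred - pts_curr, axis=1)
    return err < thresh_px
```

Features failing the test would be candidates for removal, complementing the detection-network mask the paper pairs this check with.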
29 pages, 4720 KB  
Article
Selective Downsampling for Fast and Accurate 3D Global Registration with Applications in Medical Imaging
by Roč Stilinović, Marko Švaco, Bojan Šekoranja and Filip Šuligoj
Mathematics 2026, 14(4), 606; https://doi.org/10.3390/math14040606 - 9 Feb 2026
Abstract
Robust global point-cloud registration remains a key challenge in robotic neurosurgery, particularly for markerless patient registration, where anatomical surface acquisition can be incomplete and noisy. This paper proposes practical pre-processing steps, defines performance criteria, and evaluates the keypoint-based 4-Points Congruent Set (K4PCS) and Super4PCS algorithms for global registration. Experiments are conducted on surface point clouds segmented from real patient head CT scans, while all measurement errors are synthetically simulated by applying clinically relevant perturbations, including large initial misalignment, Gaussian (CT-like) and non-Gaussian (camera-like) noise injection, and partial scans, across 30 different poses. Registration performance is quantified using pose errors and noise-aware surface-distance/overlap measures, while run-time is assessed under a newly developed selective downsampling strategy and compared to standard voxel downsampling. Results show that both algorithms reliably converge from substantial misalignment and remain robust after noise injection, with computation times ranging from 0.1 s to over 15 min. Partial-to-whole registration achieves accuracy comparable to whole-to-whole registration (errors < 10⁻³ mm), but typically exceeds real-time run-time limits. Selective downsampling provides a clear improvement in precision and, in most cases, also improves speed compared to the voxel-based downsampling method. Overall, the results indicate that robust and real-time markerless head registration is feasible under clinical conditions. Full article
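The standard voxel-grid baseline that the selective strategy is compared against can be sketched in a few lines of numpy: bin points into a regular grid and keep one centroid per occupied voxel. The selective variant itself is not specified in the abstract, so only the baseline is shown (names are illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Voxel-grid downsampling: replace all points falling in the same
    cubic voxel of edge length `voxel` with their centroid.
    points: (N,3) array; returns (M,3) array with M <= N."""
    idx = np.floor(points / voxel).astype(np.int64)
    # group points by their integer voxel index
    _, inv, counts = np.unique(idx, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.ravel()
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inv, points)       # accumulate per-voxel sums
    return sums / counts[:, None]      # centroid per occupied voxel
```

A selective strategy in the paper's spirit could, for instance, downsample feature-poor regions more aggressively than high-curvature ones, but that is speculation beyond the abstract.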