Search Results (3,005)

Search Parameters:
Keywords = autonomous robot

39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. 
Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)
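The abstract above notes that "threshold cryptography distributes cryptographic authority to eliminate single points of compromise." As a rough illustration of that one idea (not the paper's Kyber/Dilithium/PUF stack, which is not reproduced here), the following is a minimal Shamir secret-sharing sketch over a small prime field; the field size and function names are illustrative, and real deployments should use vetted cryptographic libraries rather than hand-rolled arithmetic.

```python
import random

PRIME = 2**127 - 1  # toy Mersenne-prime field; real schemes use standardized parameters

def split_secret(secret, threshold, shares, prime=PRIME):
    # Random polynomial of degree threshold-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(points, prime=PRIME):
    # Lagrange interpolation at x = 0; any `threshold` shares suffice
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (-xj)) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

Any 3 of 5 shares reconstruct the secret, while 2 or fewer reveal nothing about it, which is the property that removes the single point of compromise.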

25 pages, 12478 KB  
Article
RD-GuideNet: A Depth-Guided Framework for Robust Detection, Segmentation, and Temporal Tracking of White Button Mushrooms
by Namrata Dutt, Daeun Choi, Yiannis Ampatzidis, Won Suk Lee, Sanjeev J. Koppal and Xu Wang
Sensors 2026, 26(6), 1935; https://doi.org/10.3390/s26061935 - 19 Mar 2026
Abstract
Mushroom farms in the United States continue to face persistent labor shortages, especially during the harvesting of white button mushrooms (Agaricus bisporus), which requires selective picking by skilled workers. This study addresses this challenge by developing a depth-guided computer vision framework for automated mushroom detection, segmentation, and tracking to support timely harvest decisions, providing the foundation for selective and timely robotic harvesting. The specific objectives of the study were to (1) develop a novel image-processing algorithm (RD-GuideNet) that integrates RGB and depth images for accurate detection and segmentation of mushrooms; (2) implement a custom depth-guided tracking algorithm to preserve mushroom identities across sequential frames; and (3) compare the performance of RD-GuideNet against state-of-the-art deep learning models, YOLOv8 and YOLOv11, focusing on segmentation and tracking accuracies. The proposed RD-GuideNet achieved an F1-score of 0.93 for segmentation, outperforming YOLOv8 (0.88) and YOLOv11 (0.86), and produced sharper, more geometrically consistent boundaries that closely followed true mushroom cap contours. Its tracking consistency reached 92.7%, compared to YOLOv8 (95.3%) and YOLOv11 (94.6%); although slightly lower, RD-GuideNet maintained high temporal consistency across dense mushroom beds. These results suggest that depth-based geometric reasoning and deep learning approaches exhibit complementary strengths in dense production scenes. Combining the two may further enhance detection reliability and shape fidelity, supporting high-precision perception for autonomous mushroom harvesting. A comprehensive quantitative evaluation of such a hybrid framework will be investigated in future work. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2026)
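The depth-guided tracking objective above amounts to preserving object identities across frames. The paper's exact tracker is not detailed in the abstract, so the following is only a generic greedy nearest-centroid matcher using depth as a third coordinate; the `max_dist` gate, 3-D centroids, and function name are all assumptions for illustration.

```python
import math

def match_detections(prev, curr, max_dist=30.0):
    """Greedy nearest-centroid matching between frames.
    prev: dict id -> (x, y, z) centroid (z from the depth image)
    curr: list of (x, y, z) centroids in the new frame
    Returns dict id -> index into curr; unmatched detections get new IDs."""
    pairs = []
    for tid, p in prev.items():
        for ci, c in enumerate(curr):
            d = math.dist(p, c)
            if d <= max_dist:
                pairs.append((d, tid, ci))
    pairs.sort()  # assign closest pairs first
    assigned, used = {}, set()
    for d, tid, ci in pairs:
        if tid not in assigned and ci not in used:
            assigned[tid] = ci
            used.add(ci)
    # Detections with no nearby track start new identities
    next_id = max(prev, default=-1) + 1
    for ci in range(len(curr)):
        if ci not in used:
            assigned[next_id] = ci
            next_id += 1
    return assigned
```

For dense beds, a real system would add appearance or geometric cues on top of distance gating, but the ID-preservation bookkeeping looks essentially like this.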

30 pages, 2311 KB  
Review
Trust Assessment Methods for Blockchain-Empowered Internet of Things Systems: A Comprehensive Review
by Mostafa E. A. Ibrahim, Yassine Daadaa and Alaa E. S. Ahmed
Appl. Sci. 2026, 16(6), 2949; https://doi.org/10.3390/app16062949 - 18 Mar 2026
Abstract
The Internet of Things (IoT) is rapidly pervading daily life and linking everything. Although higher connectivity offers many benefits, including higher productivity, robotic processes, and data-driven decision-making, it also poses a number of security risks. Modern threats to data authenticity and trust are becoming harder to handle through conventional centralized security solutions. In this paper, we present a detailed investigation of the latest innovations and approaches for assessing reputation and trust in the blockchain-empowered Internet of Things (BIoT) area. A comprehensive literature search was conducted across major electronic databases, including IEEE, Springer, Elsevier, Wiley, MDPI, and top indexed conference proceedings, restricted to publications from 2018 to 2025. A total of 122 studies met the inclusion criteria, and their methodological quality was assessed using predefined quality measures. We identify existing flaws at each layer of the IoT architecture, illustrating how autonomous, transparent, and tamper-resistant blockchain ledgers address these flaws. In addition, we analytically compare public, private, consortium, and hybrid blockchain networking architectures to emphasize the underlying trade-offs among security, reliability, and decentralization. We also assess how reputation evaluation techniques evolved over time, moving from classical fuzzy logic and weighted-average models to mature game-theoretic and machine learning (ML) models, addressing their limitations in terms of computational overhead, scalability, adaptability, and deployment feasibility in IoT systems. Additionally, we outline future directions for BIoT system trust assessment and identify research limitations and potential solutions. Our research indicates that although ML-driven models offer more accurate predictions for identifying illicit node activities, they are still constrained by limited, imbalanced data and high processing overhead. Full article
(This article belongs to the Special Issue Advanced Blockchain Technologies and Their Applications)

25 pages, 1126 KB  
Article
Energy-Efficient Path Planning for AMR Using Modified A* Algorithm with Machine Learning Integration
by Mishell Cadena-Yanez, Danel Rico-Melgosa, Ekaitz Zulueta, Angela Bernardini and Jorge Rodriguez-Guerra
Robotics 2026, 15(3), 62; https://doi.org/10.3390/robotics15030062 - 18 Mar 2026
Abstract
Energy consumption optimisation has emerged as a critical need in Autonomous Mobile Robots (AMRs). Conventional A* implementations typically minimise path distance, neglecting energy-relevant factors such as directional changes and trajectory smoothness that significantly impact battery life and operational costs. This work proposes two energy-aware A* variants trained on empirical data from the KUKA KMP 1500 platform, where energy consumption is measured as battery SoC depletion: A*-RF, which integrates a Random Forest (RF) model directly into the cost function, and A*-MOD, which approximates the energy model through RF feature importance weights, achieving linear computational complexity O(nf). The RF model predicted energy consumption with an RMSE below 1.5% relative error, identifying travel distance and rotation angle as the dominant energy factors. Experimental validation across 42 path planning scenarios on a real industrial factory floor demonstrates that A*-MOD reduces energy consumption by up to 58.91% and improves operational autonomy by 2.21 times, with statistically significant improvements (p < 0.01) across all evaluated metrics. Full article
(This article belongs to the Section Sensors and Control in Robotics)
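The A*-MOD variant described above folds energy factors into the search cost via fixed weights derived from Random Forest feature importances. The paper's actual weights and cost model are not given in the abstract, so this sketch uses hypothetical weights `W_DIST` and `W_TURN` and a simple cost of distance plus absolute heading change to show where such terms enter an A* search.

```python
import heapq, itertools, math

# Hypothetical weights standing in for the RF feature importances
W_DIST, W_TURN = 1.0, 0.5

def astar_energy(grid, start, goal):
    """A* on an occupancy grid whose step cost penalises distance travelled
    and heading changes: c = W_DIST*d + W_TURN*|heading change|."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    h = lambda p: math.dist(p, goal)  # admissible: turn penalty is non-negative
    tie = itertools.count()           # tiebreaker so the heap never compares states
    open_q = [(h(start), next(tie), 0.0, start, None, [start])]
    best = {}
    while open_q:
        _, _, g, cell, heading, path = heapq.heappop(open_q)
        if cell == goal:
            return path, g
        if best.get((cell, heading), math.inf) <= g:
            continue
        best[(cell, heading)] = g
        for dr, dc in moves:
            r, c = cell[0] + dr, cell[1] + dc
            if not (0 <= r < rows and 0 <= c < cols) or grid[r][c]:
                continue
            step = W_DIST * math.hypot(dr, dc)
            if heading is not None:
                turn = math.atan2(dr, dc) - math.atan2(heading[0], heading[1])
                step += W_TURN * abs(math.atan2(math.sin(turn), math.cos(turn)))
            heapq.heappush(open_q, (g + step + h((r, c)), next(tie), g + step,
                                    (r, c), (dr, dc), path + [(r, c)]))
    return None, math.inf
```

Because the state includes the incoming heading, a zig-zag path accumulates turn penalties that a straight or diagonal path avoids, which is the mechanism by which directional changes are discouraged.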

36 pages, 47250 KB  
Article
PIRATE—Precision Imaging Real-Time Autonomous Tracker & Explorer
by Dan Zlotnikov and Ohad Ben-Shahar
J. Mar. Sci. Eng. 2026, 14(6), 558; https://doi.org/10.3390/jmse14060558 - 17 Mar 2026
Abstract
We present PIRATE (Precision Imaging Real-time Autonomous Tracker and Explorer), a fully autonomous unmanned surface vehicle designed to enable self-operating data collection and persistent tracking of mobile underwater targets through the tight integration of acoustic localization, onboard visual perception, and closed-loop navigation. PIRATE employs a single mobile acoustic receiver to estimate target position using time-difference-of-arrival (TDoA) measurements acquired at different times and locations through planned autonomous motion and uses these estimates to drive adaptive vehicle behavior and activate fine-grained visual sensing in real time. This architecture enables sustained target-driven operation, in which navigation, acoustic monitoring, and visual processing are dynamically coordinated based on mission context and localization uncertainty. The system integrates real-time AI-based visual detection and tracking with automatic mission control, allowing visual perception to operate opportunistically within an acoustically guided tracking loop rather than as a standalone sensing modality. Field experiments in a shallow-water environment demonstrate reliable autonomous navigation, single-receiver acoustic localization with meter-scale accuracy, and stable onboard visual inference under sustained operation. By enabling coupled acoustic tracking and onboard visual perception in a fully autonomous surface platform free of external infrastructure, PIRATE provides a practical foundation for fine-scale behavioral observation, adaptive marine monitoring, and long-duration studies of mobile underwater organisms. We demonstrate this advantage with two possible applications. Full article
(This article belongs to the Special Issue Design and Application of Underwater Vehicles)
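PIRATE's single-receiver TDoA idea reduces, after the known emission period is removed, to finding the point whose range differences to the successive receiver positions best match the measured arrival-time differences. As a crude stand-in for the paper's estimator (whose details are not in the abstract), here is a 2-D brute-force grid search over a bounding box; the static-target assumption, box, and step size are all simplifications.

```python
import itertools, math

def localize_tdoa(rx_positions, range_diffs, bounds, step=0.25):
    """Brute-force TDoA localisation of a static target (2-D sketch).
    rx_positions: receiver positions p0..pk (p0 is the reference)
    range_diffs: measured (|x - pi| - |x - p0|) for i = 1..k, i.e. the
                 speed of sound times the arrival-time differences
    bounds: ((xmin, xmax), (ymin, ymax)) search box
    Returns the grid point minimising the squared residuals."""
    (xmin, xmax), (ymin, ymax) = bounds
    xs = [xmin + step * i for i in range(int((xmax - xmin) / step) + 1)]
    ys = [ymin + step * i for i in range(int((ymax - ymin) / step) + 1)]
    p0 = rx_positions[0]
    best, best_err = None, math.inf
    for x, y in itertools.product(xs, ys):
        r0 = math.dist((x, y), p0)
        err = sum((math.dist((x, y), pi) - r0 - d) ** 2
                  for pi, d in zip(rx_positions[1:], range_diffs))
        if err < best_err:
            best, best_err = (x, y), err
    return best
```

A practical system would use nonlinear least squares with an uncertainty estimate rather than exhaustive search, but the residual being minimised is the same.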

26 pages, 10218 KB  
Article
Self-Adaptive Ant Colony Optimization with Bidirectional Updating for Robot Path Planning
by Yixuan Zhang, Shaoxin Sun, Yin Wang and Yiyang Yuan
Appl. Sci. 2026, 16(6), 2870; https://doi.org/10.3390/app16062870 - 17 Mar 2026
Abstract
Mobile robot path planning using Ant Colony Optimization (ACO) suffers from slow convergence, local optima, and unsmooth paths because of fixed heuristics and constant pheromone updating. In this paper, Self-Adaptive Risk-Aware Bidirectional updating ACO (SAR-BACO) is proposed with three improvements: (1) a composite heuristic incorporating target attraction, obstacle repulsion, and direction consistency to minimize early blind searching; (2) dynamic pheromone updating based on solution quality and iteration count to balance exploration and exploitation; and (3) triangular pruning to remove redundant turning points and smooth the resulting path. Theoretical analysis verifies convergence. Experimental results on grids up to 50 × 50 demonstrate that SAR-BACO performs markedly better than classical and heuristic-improved algorithms with respect to path length, convergence rate, smoothness, and efficiency. On a 50 × 50 map, SAR-BACO decreased path length, convergence iterations, and turning points by 60.68%, 48.96%, and 96.00%, respectively, compared to basic ACO (after triangular pruning, values averaged over 50 runs). The framework provides a generalizable solution for autonomous navigation tasks that must balance search efficiency and path executability. Full article
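Of the three improvements, triangular pruning is the most self-contained: drop the middle waypoint of any triple A-B-C whenever the straight segment A-C is collision-free. The following sketch assumes a grid world and uses a simple sampled line-of-sight test; a production planner would use a proper supercover line check, and the function names are illustrative.

```python
def line_clear(grid, a, b):
    """True if the straight segment a-b crosses only free cells
    (coarse sampled check, adequate for this sketch)."""
    (r0, c0), (r1, c1) = a, b
    n = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(n + 1):
        t = i / n if n else 0.0
        if grid[round(r0 + t * (r1 - r0))][round(c0 + t * (c1 - c0))]:
            return False
    return True

def triangular_prune(grid, path):
    """From each waypoint, jump to the farthest later waypoint reachable in a
    straight line, removing redundant intermediate turning points."""
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_clear(grid, path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```

On an obstacle-free L-shaped path the detour collapses to a single segment; with an obstacle in the way, only the reachable shortcut is taken.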

25 pages, 7474 KB  
Article
Push-or-Avoid: Deep Reinforcement Learning of Obstacle-Aware Harvesting for Orchard Robots
by Heng Fu, Tao Li, Qingchun Feng and Liping Chen
Agriculture 2026, 16(6), 670; https://doi.org/10.3390/agriculture16060670 - 16 Mar 2026
Abstract
In structured orchard environments, harvesting robots operate where rigid bodies (e.g., trunks, poles, and wires) coexist with flexible foliage. Strict avoidance of all obstacles significantly compromises operational efficiency. To address this, this study proposes an end-to-end autonomous harvesting framework characterized by an “avoid-rigid, push-through-soft” strategy. This framework explicitly propagates uncertainties from sensor data and reconstruction processes into the planning and policy phases. First, a multi-task perception network acquires 2D semantic masks of fruits and branches. Class probabilities and instance IDs are back-projected onto a 3D Gaussian Splatting (3DGS) representation to construct a decision-oriented, semantically enhanced 3D scene model. The policy network accepts multi-channel 3DGS rendered observations and proprioceptive states as inputs, outputting a continuous preference vector over eight predefined motion primitives. This approach unifies path planning and action decision-making within a single closed loop. Additionally, a dynamic action shielding module was designed to perform look-ahead collision risk assessments on candidate discrete actions. By employing an action mask to block actions potentially colliding with rigid obstacles, high-risk behaviors are effectively suppressed during both training and execution, thereby enhancing the robustness and reliability of robotic manipulation. The proposed method was validated in both simulation and real-world scenarios. In complex orchard scenarios, the proposed AE-TD3 algorithm achieves a harvesting success rate of 77.1%, outperforming existing RRT (53.3%), DQN (60.9%), and TD3 (63.8%) methods. Furthermore, the method demonstrates superior safety and real-time performance, with a collision rate reduced to 16.2% and an average operation time of only 12.4 s. Results indicate that the framework effectively supports efficient harvesting operations while ensuring safety. Full article
(This article belongs to the Section Agricultural Technology)
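The dynamic action shielding module described above masks out primitives whose look-ahead collision risk with rigid obstacles is too high. The paper's risk estimator and masking rule are not specified in the abstract, so this is only a conceptual sketch: per-primitive risks (from some hypothetical look-ahead check) gate the policy's non-negative preference scores, with a safest-action fallback when everything is masked.

```python
def shield_actions(preferences, risks, risk_threshold=0.3):
    """Return the index of the chosen motion primitive after shielding.
    preferences: non-negative policy scores for the discrete primitives
    risks: predicted rigid-collision probability per primitive
           (from a hypothetical look-ahead collision checker)"""
    # Zero out preferences of primitives exceeding the risk threshold
    masked = [p if r <= risk_threshold else 0.0
              for p, r in zip(preferences, risks)]
    if not any(masked):
        # All primitives masked: fall back to the least risky one
        return min(range(len(risks)), key=risks.__getitem__)
    return max(range(len(masked)), key=masked.__getitem__)
```

Applying the same mask during training (not shown) is what keeps high-risk behaviors out of the learned policy's experience, as the abstract describes.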

23 pages, 6668 KB  
Article
Development of a Visual SLAM-Based Autonomous UAV System for Greenhouse Plant Monitoring
by Jing-Heng Lin and Ta-Te Lin
Drones 2026, 10(3), 205; https://doi.org/10.3390/drones10030205 - 15 Mar 2026
Abstract
Autonomous monitoring is essential for precision agriculture in greenhouses, yet deploying unmanned aerial vehicles (UAVs) in confined, GPS-denied environments remains limited by payload, power, and cost constraints. This study developed and validated an autonomous UAV system for reliable, low-cost operation in such conditions. The proposed system employs a dual-link edge-computing architecture: a lightweight onboard controller handles flight control and sensor acquisition, while visual simultaneous localization and mapping (V-SLAM) is offloaded to an edge computer via the FPV video link. Phenotyping (flower detection and tracking/counting) is performed offline from the side-view RGB stream and does not participate in the flight control loop. Using muskmelon (Cucumis melo L.) flower development as a case study, the UAV autonomously executed daily missions for 27 days in a commercial greenhouse, performing flower detection and tracking to monitor phenological dynamics. Localization and control accuracy were evaluated against a validated UWB reference system, achieving 5.4~8.0 cm 2D RMSE for trajectory tracking and 12.7 cm translation RMSE for greenhouse mapping. This work demonstrates a practical architecture for autonomous monitoring in GPS-denied agricultural environments, with operational boundaries characterized through the sustained field deployment. The system’s design principles may extend to other indoor or communication-limited scenarios requiring lightweight, intelligent robotic operation. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
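The accuracy figures above (5.4~8.0 cm 2D RMSE against a UWB reference) are root-mean-square position errors over time-aligned trajectories. For readers unfamiliar with the metric, a minimal sketch of the computation (assuming the two trajectories are already synchronized and of equal length, which the paper's alignment pipeline would ensure):

```python
import math

def rmse_2d(estimated, reference):
    """Root-mean-square 2-D position error between time-aligned trajectories,
    e.g. V-SLAM poses versus a UWB ground-truth track."""
    assert len(estimated) == len(reference)
    se = [(ex - rx) ** 2 + (ey - ry) ** 2
          for (ex, ey), (rx, ry) in zip(estimated, reference)]
    return math.sqrt(sum(se) / len(se))
```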

17 pages, 602 KB  
Review
Artificial Intelligence Applications in Gastric Cancer Surgery: Bridging Early Diagnosis and Responsible Precision Medicine
by Silvia Malerba, Miljana Vladimirov, Aman Goyal, Audrius Dulskas, Augustinas Baušys, Tomasz Cwalinski, Sergii Girnyi, Jaroslaw Skokowski, Ruslan Duka, Robert Molchanov, Bojan Jovanovic, Francesco Antonio Ciarleglio, Alberto Brolese, Kebebe Bekele Gonfa, Abdi Tesemma Demmo, Zilvinas Dambrauskas, Adolfo Pérez Bonet, Mario Testini, Francesco Paolo Prete, Valentin Calu, Natale Calomino, Vikas Jain, Aleksandar Karamarkovic, Karol Polom, Adel Abou-Mrad, Rodolfo J. Oviedo, Yogesh Vashist and Luigi Marano
J. Clin. Med. 2026, 15(6), 2208; https://doi.org/10.3390/jcm15062208 - 13 Mar 2026
Abstract
Background: Artificial intelligence is emerging as a promising tool in surgical oncology, with growing evidence suggesting potential applications in diagnostic support, intraoperative guidance, and perioperative risk assessment. In gastric cancer surgery, emerging applications range from AI-assisted endoscopic detection to data-driven perioperative risk prediction, while some technological developments, particularly in robotic autonomy, derive from broader surgical or experimental models that may inform future gastric procedures. Methods: A narrative review was conducted following established methodological standards, including the Scale for the Assessment of Narrative Review Articles (SANRA) and the Search–Appraisal–Synthesis–Analysis (SALSA) framework. English-language studies indexed in PubMed, Scopus, Embase, and Web of Science up to October 2025 were included. Evidence was synthesized thematically across five domains: AI-assisted anatomical recognition and lymphadenectomy support, autonomous robotic systems, early cancer detection, perioperative predictive and frailty models, and ethical and regulatory considerations. Results: AI-based computer vision and deep learning algorithms have demonstrated promising capabilities for real-time anatomical recognition, surgical phase classification, and intraoperative guidance, although evidence of direct patient-level benefit remains limited. In diagnostic settings, AI-assisted endoscopy and Raman spectroscopy have been shown to improve early lesion detection and reduce dependence on operator experience. Predictive models, including MySurgeryRisk and AI-driven frailty assessments, may support individualized prehabilitation planning and perioperative risk stratification. Persistent limitations include small and heterogeneous datasets, insufficient external validation, and unresolved concerns related to data privacy, algorithmic interpretability, and medico-legal responsibility. 
Conclusions: Artificial intelligence is progressively emerging as a promising tool in gastric cancer surgery, integrating automation, advanced analytics, and human clinical reasoning. Its safe and ethical adoption requires robust validation, transparent governance, and continuous surgeon oversight. When developed within human-centered and ethically grounded frameworks, AI can augment, rather than replace, surgical expertise, potentially advancing precision, safety, and equity in oncologic care. Full article

20 pages, 6854 KB  
Article
TARTS: Training-Free Adaptive Reference-Guided Traversability Segmentation with Automated Footprint Supervision and Experimental Verification
by Shuhong Shi and Lingchuan Zeng
Electronics 2026, 15(6), 1194; https://doi.org/10.3390/electronics15061194 - 13 Mar 2026
Abstract
Autonomous mobile robots require robust traversability perception to navigate safely in diverse outdoor environments. However, traditional deep learning approaches are data-hungry, requiring large-scale manual annotations, and struggle to adapt quickly to unseen environments. This paper introduces TARTS (Training-free Adaptive Reference-guided Traversability Segmentation), a novel framework combining one-shot prototype initialization with trajectory-guided online adaptation for terrain segmentation. Using a single reference image of desired traversable terrain, TARTS establishes an initial prototype from pre-trained DINO Vision Transformer (ViT) features. The system performs segmentation through superpixel-based feature aggregation and valley-emphasis Otsu thresholding while continuously refining the prototype via Exponential Moving Average (EMA) updates driven by automated footprint supervision from the robot’s traversed trajectory. Extensive experiments on our introduced Reference-guided Traversability Segmentation Dataset (RTSD) and the challenging Off-Road Freespace Detection (ORFD) benchmark demonstrate strong performance, achieving 94.5% IoU on RTSD and 94.1% IoU on ORFD, outperforming state-of-the-art supervised methods that require multi-modal inputs and dedicated training. The framework maintains efficient performance (17–24 FPS) on embedded platforms, enabling practical deployment with only a reference image as initialization. Full article
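Two pieces of the TARTS pipeline are easy to convey compactly: cosine-similarity segmentation against a single prototype, and the EMA refinement of that prototype from footprint supervision. This sketch replaces the paper's valley-emphasis Otsu thresholding with a fixed threshold and uses toy low-dimensional features instead of DINO ViT embeddings; the function names and `alpha` value are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity; assumes non-zero feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def segment(prototype, features, threshold=0.8):
    """Label each superpixel feature traversable if its similarity to the
    prototype exceeds the threshold (fixed here; Otsu-selected in the paper)."""
    return [cosine(prototype, f) >= threshold for f in features]

def ema_update(prototype, footprint_features, alpha=0.1):
    """Pull the prototype toward the mean feature of cells the robot actually
    traversed (automated footprint supervision), via an EMA step."""
    mean = [sum(col) / len(footprint_features)
            for col in zip(*footprint_features)]
    return [(1 - alpha) * p + alpha * m for p, m in zip(prototype, mean)]
```

The training-free property follows from the fact that only this prototype vector, never any network weight, is updated online.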

32 pages, 10936 KB  
Article
PLM-Net: Perception Latency Mitigation Network for Vision-Based Lateral Control of Autonomous Vehicles
by Aws Khalil and Jaerock Kwon
Sensors 2026, 26(6), 1798; https://doi.org/10.3390/s26061798 - 12 Mar 2026
Abstract
This study introduces the Perception Latency Mitigation Network (PLM-Net), a modular deep learning framework designed to mitigate perception latency in vision-based imitation-learning lane-keeping systems. Perception latency, defined as the delay between visual sensing and steering actuation, can degrade lateral tracking performance and steering stability. While delay compensation has been extensively studied in classical predictive control systems, its treatment within vision-based imitation-learning architectures under constant and time-varying perception latency remains limited. Rather than reducing latency itself, PLM-Net mitigates its effect on control performance through a plug-in architecture that preserves the original control pipeline. The framework consists of a frozen Base Model (BM), representing an existing lane-keeping controller, and a Timed Action Prediction Model (TAPM), which predicts future steering actions corresponding to discrete latency conditions. Real-time mitigation is achieved by interpolating between model outputs according to the measured latency value, enabling adaptation to both constant and time-varying latency. The framework is evaluated in a closed-loop deterministic simulation environment under fixed-speed conditions to isolate the impact of perception latency. Results demonstrate significant reductions in steering error under multiple latency settings, achieving up to 62% and 78% reductions in Mean Absolute Error (MAE) for constant and time-varying latency cases, respectively. These findings demonstrate the architectural feasibility of modular latency mitigation for vision-based lateral control under controlled simulation settings. The project page including video demonstrations, code, and dataset is publicly released. Full article
(This article belongs to the Special Issue Intelligent Control Systems for Autonomous Vehicles)
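The abstract states that PLM-Net achieves real-time mitigation "by interpolating between model outputs according to the measured latency value." The blending rule itself is not given, so the following assumes simple linear interpolation between the steering predictions of models trained at discrete latency levels; the latency grid and values are illustrative.

```python
def interpolate_action(latency_ms, predictions):
    """Blend steering predictions from models trained at discrete latency
    levels according to the measured latency (linear-interpolation sketch).
    predictions: list of (latency_ms, steering_angle), sorted by latency."""
    if latency_ms <= predictions[0][0]:
        return predictions[0][1]
    if latency_ms >= predictions[-1][0]:
        return predictions[-1][1]
    for (l0, a0), (l1, a1) in zip(predictions, predictions[1:]):
        if l0 <= latency_ms <= l1:
            w = (latency_ms - l0) / (l1 - l0)
            return (1 - w) * a0 + w * a1
```

Because the interpolation happens over outputs rather than weights, the frozen Base Model's pipeline is untouched, which matches the plug-in architecture the abstract describes.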

22 pages, 5676 KB  
Article
Complete Coverage Random Path Planning Based on a Novel Fractal-Fractional-Order Multi-Scroll Chaotic System
by Xiaoran Lin, Mengxuan Dong, Xueya Xue, Xiaojuan Li and Yachao Wang
Mathematics 2026, 14(5), 926; https://doi.org/10.3390/math14050926 - 9 Mar 2026
Abstract
With the increasing demands for autonomy and coverage efficiency in tasks such as security patrol and post-disaster exploration using mobile robots, achieving random, efficient, and complete coverage path planning has become a critical challenge. Traditional chaotic path planning methods, while capable of generating unpredictable trajectories, still have limitations in terms of randomness strength, traversal uniformity, and convergence coverage. To address this, this study proposes a complete-coverage random path planning method based on a novel four-dimensional fractal-fractional multi-scroll chaotic system. The main contributions of this research are as follows. First, by introducing additional state variables and fractal-fractional operators into the classical Chen system, a fractal-fractional chaotic system with a multi-scroll attractor structure is constructed. The output of this system is then mapped into robot angular velocity commands to achieve area coverage in unknown environments. Key findings include: the novel chaotic system possesses two positive Lyapunov exponents; Spectral Entropy (SE) and Complexity (CO) analyses indicate that when parameter B is fixed and the fractional order α increases, the dynamic complexity of the system rises significantly; in a 50 × 50 grid environment, the robot driven by this system achieved a coverage rate of 98.88% within 10,000 iterations, outperforming methods based on the Lorenz and Chua systems and on random walks; and ablation experiments further demonstrate that the combined effects of the fractal order β, fractional order α, and multi-scroll nonlinear terms are key to enhancing system complexity and coverage performance. This study not only provides new ideas for constructing complex chaotic systems but also offers a reliable theoretical foundation and practical solution for mobile robots performing efficient, random, and high-coverage autonomous inspection tasks in unknown regions. Full article
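The core mechanism above — mapping a chaotic system's output to angular velocity commands and measuring what fraction of a grid the resulting trajectory visits — can be sketched with a far simpler chaotic source. The logistic map below stands in for the paper's fractal-fractional multi-scroll system (which would require a fractional-order solver), and the gain, step count, and clamped unicycle model are all illustrative choices.

```python
import math

def chaotic_coverage(size=20, steps=4000, x0=0.7, r=3.99):
    """Drive a unit-speed unicycle robot with angular velocity drawn from a
    logistic map (a stand-in for the fractal-fractional multi-scroll system)
    and report the fraction of grid cells visited."""
    x, y, heading, z = size / 2, size / 2, 0.0, x0
    visited = set()
    for _ in range(steps):
        z = r * z * (1 - z)              # chaotic iterate in (0, 1)
        heading += (z - 0.5) * math.pi   # map iterate to a turn command
        # Unit step forward, clamped to the workspace boundary
        x = min(max(x + math.cos(heading), 0.0), size - 1e-9)
        y = min(max(y + math.sin(heading), 0.0), size - 1e-9)
        visited.add((int(x), int(y)))
    return len(visited) / (size * size)
```

The paper's comparison amounts to running exactly this kind of counter with different chaotic sources (Lorenz, Chua, their proposed system) and comparing the coverage fractions after a fixed iteration budget.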

64 pages, 9863 KB  
Review
Drone-Enabled Practices in Modern Warehouse Management: A Comprehensive Review
by Eknath Pore, Bhumeshwar K. Patle, Sandeep Thorat and Brijesh Patel
Drones 2026, 10(3), 189; https://doi.org/10.3390/drones10030189 - 9 Mar 2026
Abstract
The advent of drone technology has led to groundbreaking advancements across various industries, including warehousing operations. In recent years, warehouse drones have garnered significant attention for their potential to transform traditional inventory management and order fulfillment processes. This paper presents a comprehensive review that synthesizes findings from more than 120 research papers on drone-enabled practices in warehouses. The review systematically considers multiple parameters, including drone function (inventory counting, mapping, surveillance, inspection, and intralogistics support), robot platform (UAV, UAV-AGV), deployment architecture (single- and multi-drone systems), validation approach (real-time and simulation), technology and methodology used (modern electronic devices, AI, and IoT), and environmental context (dynamic and static). Furthermore, the paper explores the diverse applications of warehouse drones in inventory management, maintenance and inspection, picking and packaging, goods transportation, security and surveillance, and warehouse layout optimization. The review highlights that most studies still rely on single-UAV systems tested mainly in simulation, with only a few real-time demonstrations of fully autonomous performance inside real warehouses. Although multi-drone approaches are emerging to improve scalability, they continue to struggle with coordination and safety. Research remains largely focused on static environments, with dynamic warehouse conditions receiving far less attention despite their practical importance. The findings are presented through clear tables and comparisons, making it easier to understand existing studies and pinpoint the gaps in the current literature. Full article
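The review's classification scheme lends itself to a simple record-and-filter representation. The sketch below models the survey's axes as a small dataclass and filters for the under-explored combination the review identifies (real-time validation in dynamic environments); the three entries are hypothetical placeholders, not items from the review's actual corpus.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Study:
    """One reviewed work, tagged along the survey's classification axes."""
    function: str      # e.g. "inventory counting", "surveillance"
    platform: str      # "UAV" or "UAV-AGV"
    architecture: str  # "single" or "multi-drone"
    validation: str    # "simulation" or "real-time"
    environment: str   # "static" or "dynamic"

# Hypothetical entries for illustration only -- not the review's dataset.
corpus = [
    Study("inventory counting", "UAV", "single", "simulation", "static"),
    Study("surveillance", "UAV", "multi-drone", "simulation", "static"),
    Study("intralogistics", "UAV-AGV", "single", "real-time", "dynamic"),
]

# The gap the review highlights: few real-time, dynamic-environment studies.
gap = [s for s in corpus
       if s.validation == "real-time" and s.environment == "dynamic"]
print(len(gap), "of", len(corpus), "entries cover the under-explored case")
```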

37 pages, 5507 KB  
Article
Target Tissue Identification Based on Image Processing for Regulating Automatic Robotic Lung Biopsy Sampler: Onsite Phantom Validation
by Maria Monserrat Diaz-Hernandez, Gerardo Ramirez-Nava and Isaac Chairez
Sensors 2026, 26(5), 1723; https://doi.org/10.3390/s26051723 - 9 Mar 2026
Abstract
Cancer is one of the global health problems affecting millions of people every year. Biopsies are among the standard methods for detecting and confirming a cancer diagnosis. Performing this procedure manually poses several challenges due to tissue movement and the difficulty of precisely locating the target, as is often the case in lung biopsies. This study presents the design and implementation of an autonomous image processing algorithm embedded in a closed-loop controller that drives a six-degree-of-freedom robotic manipulator performing emulated tissue biopsies. A realistic lung motion emulator, based on a two-degree-of-freedom robotic device with a photon emitter (simulating radiopharmaceutical identification of cancerous tissue), was used to test the proposed automatic biopsy collector. Image processing is applied to detect cancerous tissue and identify the tumor's centroid and boundaries. Using the detected centroid coordinates, the reference trajectory of the end effector (biopsy needle) is determined automatically. A finite-time convergent controller was implemented to guide the robotic manipulator towards the tumor position within a specified time window. The controller was evaluated on a digital twin of the entire robotic system and on an experimental device operating on the mobile tumor emulator. The effectiveness of simulated tumor detection and reference trajectory tracking was used to validate the proposed automatic robotic lung biopsy sampler. The controller tracks the position of the emulated tumor with a deviation of 0.52 mm and a settling time of less than 1 s. Full article
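The centroid-detection step described above can be sketched in a few lines. The abstract does not give the paper's segmentation details, so this example uses a simple relative threshold on a synthetic Gaussian "hot spot" (standing in for the photon-emitter signal); the 64×64 detector size, threshold, and spot location are illustrative assumptions.

```python
import numpy as np

def tumor_centroid(image, thresh=0.5):
    """Binarize an intensity image and return the target centroid (row, col).

    Stand-in for the paper's pipeline: segment the simulated
    radiopharmaceutical hot spot, then average the pixel coordinates.
    """
    mask = image >= thresh * image.max()
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                       # no target detected in this frame
    return rows.mean(), cols.mean()

# Synthetic frame: a Gaussian "hot spot" centered at (40, 25) on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64]
frame = np.exp(-((yy - 40) ** 2 + (xx - 25) ** 2) / 30.0)

cy, cx = tumor_centroid(frame)
print(f"detected centroid: ({cy:.1f}, {cx:.1f})")  # (40.0, 25.0)
```

In the closed-loop setting, the centroid returned per frame would serve as the time-varying reference for the needle's trajectory controller.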

23 pages, 15691 KB  
Article
ProM-Pose: Language-Guided Zero-Shot 9-DoF Object Pose Estimation from RGB-D with Generative 3D Priors
by Yuchen Li, Kai Qin, Haitao Wu and Xiangjun Qu
Electronics 2026, 15(5), 1111; https://doi.org/10.3390/electronics15051111 - 7 Mar 2026
Abstract
Object pose estimation is fundamental for robotic manipulation, autonomous driving, and augmented reality, yet recovering the full 9-DoF state (rotation, translation, and anisotropic 3D scale) from RGB-D observations remains challenging for previously unseen objects. Existing methods rely on instance-specific CAD models or predefined category boundaries, or suffer from scale ambiguity under sparse observations. We propose ProM-Pose, a unified cross-modal temporal perception framework for zero-shot 9-DoF object pose estimation. By integrating language-conditioned generative 3D shape priors as canonical geometric references, an asymmetric cross-modal attention mechanism for spatially aware fusion, and a decoupled pose decoding strategy with temporal refinement, ProM-Pose constructs metrically consistent and semantically grounded representations without relying on category-specific pose priors or instance-level CAD supervision. Extensive experiments on the CAMERA25 and REAL275 benchmarks demonstrate that ProM-Pose achieves competitive or superior performance compared to category-level methods, with mAP of 75.0% at 5°,2cm and 90.5% at 10°,5cm on CAMERA25, and 42.2% at 5°,2cm and 76.0% at 10°,5cm on REAL275 under zero-shot cross-domain evaluation. Qualitative results on real-world logistics scenarios further validate temporal stability and robustness under occlusion and lighting variations. ProM-Pose effectively bridges semantic grounding and metric geometric reasoning within a unified formulation, enabling stable and scale-aware 9-DoF pose estimation for previously unseen objects under open-vocabulary conditions. Full article
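The n°,m cm success criterion quoted in the abstract's mAP figures can be made concrete. The sketch below checks a single predicted pose against ground truth using the geodesic rotation angle and Euclidean translation error; the actual benchmarks aggregate such checks into mAP over many instances and thresholds, so this is only the per-pose building block, with translations assumed to be in metres.

```python
import numpy as np

def rotation_angle_deg(R_pred, R_gt):
    """Geodesic angle between two rotation matrices, in degrees."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pose_success(R_pred, t_pred, R_gt, t_gt, deg=5.0, cm=2.0):
    """n-degree, m-centimetre criterion used by benchmarks such as REAL275."""
    ang = rotation_angle_deg(R_pred, R_gt)
    dist_cm = 100.0 * np.linalg.norm(t_pred - t_gt)
    return bool(ang <= deg and dist_cm <= cm)

# Toy check: a 3-degree rotation about z plus a 1 cm offset passes 5°,2cm.
theta = np.radians(3.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
ok = pose_success(R, np.array([0.01, 0.0, 0.0]), np.eye(3), np.zeros(3))
print(ok)  # True
```

The anisotropic 3D scale (the remaining three degrees of freedom) would be scored separately, typically via 3D IoU of the scaled bounding boxes.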
