Article

Traversal by Touch: Tactile-Based Robotic Traversal with Artificial Skin in Complex Environments

Department of Computer Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(21), 6569; https://doi.org/10.3390/s25216569
Submission received: 12 September 2025 / Revised: 17 October 2025 / Accepted: 22 October 2025 / Published: 25 October 2025
(This article belongs to the Special Issue Intelligent Robots: Control and Sensing)

Abstract

We evaluate tactile-first robotic traversal on the Department of Homeland Security (DHS) figure-8 mobility test using a two-way repeated-measures design across algorithms (three tactile policies—M1 reactive, M2 terrain-weighted, M3 memory-augmented; a monocular camera baseline, CB-V; a tactile histogram baseline, T-VFH; and an optional tactile-informed replanner, T-D* Lite) and lighting conditions (Indoor, Outdoor, and Dark). The platform is the custom-built Eleven robot—a quadruped integrating a joint-mounted tactile tentacle with a tip force-sensitive resistor (FSR; Walfront 9snmyvxw25, China; 0–10 kg range, ≈0.1 N resolution @ 83 Hz) and a woven Galvorn carbon-nanotube (CNT) yarn for proprioceptive bend sensing. Control and sensing are fully wireless via an ESP32-S3, Arduino Nano 33 BLE, Raspberry Pi 400, and a mini VESC controller. Across 660 trials, the tactile stack maintained ∼21 ms (p50) policy latency and mid-80% success across all lighting conditions, including total darkness. The memory-augmented tactile policy (M3) exhibited consistent robustness relative to the camera baseline (CB-V), trailing by only ≈3–4% in Indoor and ≈13–16% in Outdoor and Dark conditions. Pre-specified two one-sided tests (TOST) did not establish speed equivalence in any M3↔CB-V comparison. Unlike vision-based approaches, tactile-first traversal is invariant to illumination and texture—an essential capability for navigation in darkness, smoke, or texture-poor, confined environments. Overall, these results show that a tactile-first, memory-augmented control stack achieves lighting-independent traversal on DHS benchmarks while maintaining competitive latency and success, trading modest speed for robustness and sensing independence.

1. Introduction

Robots that must operate in light-starved, dust-filled, or smoke-obscured environments cannot assume reliable exteroceptive vision. Under such degraded conditions, illumination and texture cues collapse, and visual sensors fail to provide consistent spatial awareness. Therefore, tactile sensing becomes a critical modality for safe traversal and interaction, enabling robots to perceive structure through direct contact rather than reflected light. Decades of work in tactile sensing and artificial skin demonstrate that contact can encode geometry, compliance, and force for robust interaction and control [1,2,3,4,5]. In parallel, probabilistic and sampling-based motion frameworks, such as A* search (A-star) [6,7], Rapidly-Exploring Random Trees (RRT/RRT*) [8,9,10,11], and Partially Observable Markov Decision Processes (POMDPs) [12], have become foundational for navigation under uncertainty, while bio-inspired morphology and control approaches continue to enhance robustness in cluttered or confined spaces [13,14,15,16]. We also acknowledge seminal surveys on robustness, autonomous mobile robots, and human–robot interaction [17,18,19,20,21,22].
  • Problem Statement
We consider the following question: Given an arbitrary and unknown confined space accessible through an entry point, containing physical obstacles and at least one path to egress, and a robot equipped solely with touch-sensitive skin, what algorithms allow the space to be traversed and described to an external operator with no prior knowledge? This framing follows the spirit of the classic navigation and planning literature [12,23].
  • Motivation and Context
Figure 1, Figure 2 and Figure 3 show the custom-built Eleven robot navigating the standardized Department of Homeland Security (DHS) figure-8 course. The robot’s joint-mounted tactile tentacle integrates a tip FSR and a woven Galvorn carbon-nanotube (CNT) yarn for proprioceptive bend sensing. This enables robust navigation in conditions that defeat cameras—total darkness, smoke, or texture-poor surfaces [24,25].
  • Contributions
  • Tactile-first traversal stack. A memory-augmented tactile policy (M3) integrating decaying contact history with reactive clearance and terrain proxies, building on probabilistic and sampling-based foundations [9,11,12].
  • Factorial evaluation. DHS figure-8 benchmark under Indoor/Outdoor/Dark tiers comparing tactile and visual baselines; analysis connects to broader robustness surveys [17,21].
  • Latency and success. Stable 21 ms p50 latency, mid-80% success, and comprehensive logging.
  • Speed trade-offs. M3 trails CB-V by 3–4% (Indoor) and 13–16% (Outdoor/Dark); no speed equivalence per TOST.
  • Illumination independence. Tactile traversal maintains competitive performance without visual reliance, consistent with the promise of whole-body artificial skins [26,27,28,29,30,31,32,33,34].
  • Scope
Our goal is not perceptual superiority over vision but to quantify what a pure-touch stack can achieve, how memory contributes, and what trade-offs exist relative to vision-based baselines.

2. Related Work

Earlier tactile research emphasized manipulation and surface coverage [1,2,4,35]: DLR’s capacitive skin balanced sensitivity and durability [27]; Ohmura et al. developed conformable tactile arrays for curved surfaces [29]; and uSkin provided digital three-axis sensing [30]. Middleware such as SkinWare scaled acquisition to large taxel networks [31]. Additional advances include stretchable and multimodal tactile elements and touch panel analyses [28,32,33,34]. On the planning side, we draw on deterministic and sampling-based frameworks [6,7,8,9,10,11,12,23], as well as broader surveys of motion planning, robustness, and HRI [17,18,19,20,21,22].
Representative approaches and their key characteristics are summarized in Table 1.

3. System Overview—Eleven

  • Architecture
The Eleven robot integrates actuation, sensing, and computing in a compact wireless stack:
  • Actuation: T-Motor F60 Pro V-LV (2207.5, 1950 KV; T-MOTOR Co., Ltd., Nanchang, China) brushless motor with mini VESC (FOC mode).
  • Sensing: Tip FSR (Walfront 9snmyvxw25, China; 0–10 kg range, ≈0.1 N resolution @ 83 Hz), woven Galvorn CNT flexure (DexMat, Houston, TX, USA), and magnetic encoder (Model TLE5012B, Infineon Technologies, Augsburg, Germany).
  • Compute: Raspberry Pi 400 (Raspberry Pi Ltd., Cambridge, UK; high-level policy), ESP32-S3 Touch LCD (Espressif Systems, China; local control/UI), Arduino Nano 33 BLE Rev2 (Arduino, Turin, Italy; tactile sampling).
  • Power: LiPo 11.1 V with an inline 25–40 A fuse and 5 V buck converter.
It is fully self-contained, requiring no external sensors or computing, consistent with whole-body skin concepts [26,27,28,29,30,31,32,33,34].

4. Algorithms

  • Notation and Symbols
Before presenting the M1–M3 formulations, Table 2 summarizes the key symbols and their meanings used throughout the algorithms.

4.1. M1—Reactive Tactile Traversal

M1 selects the heading with maximum clearance; d_free(θ) denotes the estimated free distance along ray θ. Sensors. The FSR detects contact onset, the CNT flexure provides a bend estimate, and the encoder tracks joint angle ϕ [1,2].
J_clr(θ) = 1 / d_free(θ).
The full M1 procedure is summarized in Algorithm 1.
Algorithm 1 M1: Reactive Tactile Traversal
1: for each tick k do
2:    Estimate d_free(θ) for all θ ∈ Θ
3:    Compute J_clr(θ) = 1 / d_free(θ)
4:    Select θ* = arg min_θ J_clr(θ)
5:    Execute (θ*, v), with reduced v if contact is detected
6: end for
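As an illustrative sketch only (not the deployed firmware), the M1 tick reduces to picking the ray with minimum inverse clearance; the candidate ray set, clearance values, and speed constants below are assumptions for the example:

```python
import math

def m1_select_heading(d_free, contact):
    """Pick the heading with maximum clearance (minimum J_clr = 1/d_free).

    d_free : dict mapping candidate heading theta (rad) -> estimated free distance (m)
    contact: bool, True if the tip FSR currently reports contact
    """
    eps = 1e-6  # guard against zero clearance
    j_clr = {theta: 1.0 / max(d, eps) for theta, d in d_free.items()}
    theta_star = min(j_clr, key=j_clr.get)
    # Illustrative speed commands (m/s): slow down while in contact.
    v = 0.1 if contact else 0.3
    return theta_star, v

# One policy tick with three candidate rays (clearances in metres, assumed):
rays = {-math.pi / 4: 0.4, 0.0: 1.2, math.pi / 4: 0.2}
theta, v = m1_select_heading(rays, contact=False)
# theta == 0.0 (the ray with the largest clearance), v == 0.3
```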

4.2. M2—Terrain-Weighted Traversal

M2 adds terrain weighting via τ̂(θ) (contact roughness, stiffness, and smoothness proxies).
  • Computation of τ̂(θ)
At each policy tick k, and for each candidate ray θ ∈ Θ, we compute a terrain difficulty proxy τ̂(θ, k) ∈ [0, 1] as a convex combination of three normalized tactile features:
τ̂(θ, k) = w_r R(θ, k) + w_k K(θ, k) + w_s S(θ, k),  with w_r + w_k + w_s = 1.
  • Roughness R(θ, k) is the exponentially smoothed contact occupancy in the ray’s angular bin, obtained by projecting recent tip contacts (FSR/contact_flag) via their contact_bearing into θ ± Δθ/2 and integrating contact_time_pct over a short trailing window. Stiffness K(θ, k) is the median slope of force vs. bend measured during contact episodes in that bin, i.e., K ∝ median(ΔFSR_N / Δϕ_meas), which increases with harder terrain. Smoothness S(θ, k) penalizes local oscillation via the trailing-window variance of the encoder bend rate (or tracking error), S ∝ var(ϕ̇_meas), accumulated over contacts associated with the bin. Each raw feature is normalized across all rays at tick k using min–max scaling with a small ε for stability, then combined with fixed weights (w_r, w_k, w_s) locked before data collection; the M2 cost is
    J_M2(θ) = 1/d_free(θ) + λ_τ τ̂(θ, k),
    with λ_τ fixed for all experiments.
  • Sensors. Contact rate from the FSR provides a roughness proxy, flexure variance from the CNT yarn indicates local stiffness, and encoder stability contributes a smoothness proxy; together, these form τ̂(θ) along candidate rays:
    J_M2(θ) = 1/d_free(θ) + λ_τ τ̂(θ).
The full M2 procedure is summarized in Algorithm 2.
Algorithm 2 M2: Terrain-Weighted Traversal
1: for each tick k do
2:    Estimate d_free(θ) and τ̂(θ)
3:    Compute J(θ) = 1/d_free(θ) + λ_τ τ̂(θ)
4:    Select θ* = arg min_θ J(θ)
5:    Execute (θ*, v)
6: end for
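A minimal Python sketch of the M2 cost, assuming illustrative per-ray feature values, weights (w_r, w_k, w_s), and λ_τ (the paper fixes these before data collection; the numbers here are not the experimental settings):

```python
def normalize(values, eps=1e-6):
    """Min-max scale a per-ray feature dict to [0, 1] across all rays at one tick."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo + eps) for k, v in values.items()}

def m2_cost(d_free, rough, stiff, smooth, w=(0.4, 0.3, 0.3), lam_tau=0.5):
    """J_M2(theta) = 1/d_free(theta) + lambda_tau * tau_hat(theta)."""
    r, k, s = normalize(rough), normalize(stiff), normalize(smooth)
    cost = {}
    for theta, d in d_free.items():
        tau_hat = w[0] * r[theta] + w[1] * k[theta] + w[2] * s[theta]
        cost[theta] = 1.0 / max(d, 1e-6) + lam_tau * tau_hat
    return cost

# Assumed per-ray readings: straight ahead has the most clearance but the
# roughest, stiffest contact signature.
d_free = {-0.5: 0.8, 0.0: 1.0, 0.5: 0.9}
rough  = {-0.5: 0.1, 0.0: 0.9, 0.5: 0.2}   # contact-occupancy proxy
stiff  = {-0.5: 0.2, 0.0: 0.8, 0.5: 0.3}   # force-vs-bend slope proxy
smooth = {-0.5: 0.1, 0.0: 0.7, 0.5: 0.2}   # bend-rate variance proxy
j = m2_cost(d_free, rough, stiff, smooth)
theta_star = min(j, key=j.get)
# Terrain weighting shifts the choice away from the straight-ahead ray.
```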

4.3. M3—Memory-Augmented Traversal

M3 maintains a decaying tactile memory M_t(x) updated with contacts [12,23]. Sensors. The FSR updates the memory field M_t, the CNT flexure localizes contact arcs along the arm, and the joint encoder maintains geometric consistency, so that ∫_ray(θ) M_t(x) dx penalizes recently contacted regions.
M_{t+1}(x) = (1 − ρ) M_t(x) + α · 1{contact at x}.
J_M3(θ) = 1/d_free(θ) + λ_τ τ̂(θ) + λ_m ∫_ray(θ) M_t(x) dx.
The full M3 procedure is summarized in Algorithm 3.
Algorithm 3 M3: Memory-Augmented Traversal
1: Initialize M_0(x) ← 0
2: for each tick k do
3:    for each θ ∈ Θ do
4:       Compute d_free(θ) and τ̂(θ)
5:       Evaluate J_mem(θ) = ∫_ray(θ) M_t(x) dx
6:       J(θ) = 1/d_free(θ) + λ_τ τ̂(θ) + λ_m J_mem(θ)
7:    end for
8:    Select θ* = arg min_θ J(θ)
9:    Execute (θ*, v); update M_{t+1}(x)
10: end for
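The decaying memory update and ray integral can be sketched on a coarse contact grid; the discrete cells, ρ, and α values below are illustrative assumptions standing in for the continuous field M_t(x):

```python
def update_memory(M, contact_cells, rho=0.05, alpha=1.0):
    """M_{t+1}(x) = (1 - rho) * M_t(x) + alpha * 1{contact at x}, on a coarse grid."""
    return {cell: (1 - rho) * val + (alpha if cell in contact_cells else 0.0)
            for cell, val in M.items()}

def ray_memory(M, ray_cells):
    """Discrete stand-in for the integral of M_t(x) along ray(theta)."""
    return sum(M.get(cell, 0.0) for cell in ray_cells)

# Two-cell world: cell 'a' was just contacted, cell 'b' was not.
M = {'a': 0.0, 'b': 0.0}
M = update_memory(M, contact_cells={'a'})
# A ray passing through 'a' now carries a memory penalty; one through 'b' does not,
# so J_M3 steers away from the recently contacted region.
penalty_a = ray_memory(M, ['a'])   # 1.0
penalty_b = ray_memory(M, ['b'])   # 0.0
```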

4.4. Design Overview

We employed a two-way factorial, repeated-measures design with factors Algorithm × Lighting. The same robot (Eleven) was used across all cells without hardware modification. Trials were conducted on the standardized Department of Homeland Security (DHS) figure-8 mobility course, following the DHS–NIST–ASTM Standard Test Methods for Response Robots [36], “Mobility: Confined Area Terrains (figure-8 path).” Lighting conditions (Indoor, Outdoor, and Dark) were alternated systematically; lux levels were logged before each trial using a handheld light meter. The robot’s onboard systems recorded all sensor and control streams while the course remained fixed. Each Algorithm × Lighting cell targeted up to 30 randomized trials per condition, with blocked scheduling to balance battery state, time of day, and operator load. Optional comparators (T-VFH and T-D* Lite) were run at reduced n under the same DHS layout.

4.5. Procedure

Each experimental trial followed the standardized DHS figure-8 course protocol.
Figure 4 summarizes the experimental procedure.

5. Results

The transition from M1 and M2 to the memory-augmented M3 policy yielded measurable qualitative and quantitative improvements on the DHS figure-8 course. M3 reduced redundant reversals and oscillations around obstacles—behaviors common in M1 and M2—by incorporating a decaying tactile memory integral that discouraged revisiting previously contacted regions. This produced smoother progress and more stable trajectories, particularly in cluttered or low-visibility conditions. Compared with the tactile histogram baseline (T-VFH), M3 achieved higher success and fewer reversals while maintaining comparable policy latency. However, speed equivalence to the monocular camera baseline (CB-V) was not established: M3 consistently trailed CB-V in rate of advance by ∼3–4% indoors and ∼13–16% outdoors and in the dark, trading throughput for robustness and illumination independence.
Figure 5 compares the rate of advance across algorithms.
Figure 6 reports commanded speed by lighting tier.
Figure 7 visualizes an example TOST result for CPU overhead (±2 pp bounds).
Figure 8 shows an example TOST result for wall-time overhead (±5% bounds).
Figure 9 presents mean command disagreement (ℓ2) by lighting.
Figure 10 evaluates the predictive validity of Tactile Traversability (TT).
Table 3 reports command disagreement by algorithm × lighting.
Table 4 summarizes command disagreement aggregated by algorithm.
Table 5 summarizes command disagreement aggregated by lighting tier.
  • Success Rates Across Lighting Conditions
Across all 660 trials, overall success was in the mid-80% range. Broken down by lighting, success for M3 was 86.7% (Indoor), 83.3% (Outdoor), and 80.0% (Dark), compared with 86.7%, 83.3%, and 83.3% for CB-V (Indoor, Outdoor, and Dark). These results indicate that M3 maintains competitive success across tiers, with its primary trade-off appearing in rate-of-advance rather than completion likelihood.
  • TOST Speed-Equivalence Test
Two one-sided tests (TOSTs) were pre-registered to assess whether M3 and CB-V were statistically equivalent in rate of advance, using equivalence bounds of ± 5 % (Indoor) and ± 8 % (Outdoor and Dark) relative to the CB-V mean. None of the M3↔CB-V comparisons met these bounds in any lighting tier. The confidence intervals of the speed differences lay entirely outside the equivalence regions, confirming no speed equivalence. The interpretation is that M3’s advantage lies in robustness, stability, and lighting independence, while CB-V retains a modest throughput edge under favorable visibility.
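The equivalence logic can be sketched with a normal-approximation TOST. All numbers below (differences, standard errors, the ±5% bound) are synthetic illustrations, not the paper's measurements:

```python
import math

def tost_equivalent(diff, se, low, high, alpha=0.05):
    """Two one-sided tests (TOST) with a normal approximation.

    Equivalence is declared only if BOTH one-sided tests reject:
    H0a: diff <= low   and   H0b: diff >= high.
    """
    def p_upper(z):  # P(Z >= z) for standard normal
        return 0.5 * math.erfc(z / math.sqrt(2.0))
    p1 = p_upper((diff - low) / se)   # test against the lower equivalence bound
    p2 = p_upper((high - diff) / se)  # test against the upper equivalence bound
    return max(p1, p2) < alpha

# Synthetic example with a +-5% equivalence region on relative speed difference:
# an observed -10% difference with SE 2% is clearly NOT equivalent ...
assert not tost_equivalent(diff=-0.10, se=0.02, low=-0.05, high=0.05)
# ... while a -1% difference with the same SE is.
assert tost_equivalent(diff=-0.01, se=0.02, low=-0.05, high=0.05)
```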

6. Discussion

Tactile-first traversal demonstrates clear mission relevance in domains where vision cannot be trusted. In confined inspection tasks, such as pipe or tunnel surveys, zero-visibility searches in smoke-filled structures, dust-laden subterranean rescues, or nuclear decommissioning site work, illumination and texture vary unpredictably, and reflective surfaces and particulates confound optical sensors. The custom-built Eleven platform, with its joint-mounted tactile tentacle (tip FSR: Walfront 9snmyvxw25, China; 0–10 kg range, ≈0.1 N resolution @ 83 Hz), Galvorn CNT flexure, and distributed ESP32-S3 + Arduino Nano 33 BLE + Pi 400 control, maintained stable decision latency (≈21 ms p50) and mid-80% success across Indoor, Outdoor, and Dark tiers, confirming that mechanical contact sensing is fundamentally illumination-independent. Our analysis aligns with broader viewpoints on robustness in robotics and automated navigation [17,18,19,21,22].
  • Planning Integration
The present system couples a short-horizon, reactive tactile policy with immediate sensory feedback, without global reasoning. Bridging this local memory-based approach to full kinodynamic or sampling-based planners will require efficient abstractions that merge decaying contact maps M_t(x) with a sparse global representation without duplicating computation. A plausible path is hierarchical planning, where a high-level planner (e.g., RRT*, Dijkstra/A*) operates on low-frequency, coarse occupancy updates while M3 continues to manage fine-scale reactions and short-term safety locally [6,7,9,10,11,12,23]. Short-horizon MPC offers a route to recover some throughput while preserving safety margins [37], and dedicated accelerators for learned planners are promising [38].
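As a concrete illustration of that hierarchical split, a high-level A* over a coarse occupancy grid could supply waypoints while the tactile policy handles local reactions. The grid, unit step costs, and 4-connectivity below are assumptions for the sketch, not part of the deployed system:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route to goal

# Coarse occupancy snapshot: the middle row is blocked except its last cell.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
# The planner detours around the occupied cells: 7 cells from (0,0) to (2,0).
```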
  • Scaling Tactile Skins
Extending M3-style control to large, body-scale tactile arrays introduces physical and software constraints. Wiring density, taxel addressing, and bandwidth scale with area, creating power and latency bottlenecks that can exceed microcontroller or bus limits. Advances such as polymer-based artificial skin with modular segments [27], uSkin’s digital three-axis taxels [30], and SkinWare middleware for distributed acquisition [31] point to viable scaling strategies. These architectures decentralize preprocessing and fault tolerance, ensuring that tactile data remain timely and robust enough for real-time control even on multi-module robots.

7. Future Work

  • Heading reversal mitigation. Logged heading reversal counts and contact bearings indicate occasional oscillations when multiple rays have near-equal clearance costs. The next step is to train a damping controller that anticipates reversals using short sequences of tactile memory snapshots and contact direction histories.
  • Repeated impact reduction. The contacts/m metric and per-trial events suggest clustered FSR activations on the same obstacle face. Learning a short-horizon predictive map that discourages re-entry into previously impacted regions could help.
  • Short-horizon MPC for speed recovery. Incorporating a short-horizon MPC that gates forward velocity by predicted contact risk from M t could recover part of the rate-of-advance gap while maintaining robustness [37].
  • Learned planners and accelerators. Contacts, reversals, and per-ray costs across 660 trials form a dataset for policy distillation and hardware-aware inference [38].
  • Multi-robot tactile exploration. Cooperative mapping via compressed tactile submaps and limited-bandwidth exchange is a compelling direction [39].

8. Hardware Panels

Figure 11 details the joint-mounted tentacle hardware (a–d).

Author Contributions

Conceptualization, A.M.; Software, A.M.; Formal analysis, A.M.; Writing—review & editing, A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available at this time due to ongoing analysis for related projects.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dahiya, R.S.; Metta, G.; Valle, M.; Sandini, G. Tactile Sensing—From Humans to Humanoids. IEEE Trans. Robot. 2010, 26, 1–20. [Google Scholar] [CrossRef]
  2. Lee, M.H.; Nicholls, H.R. Tactile sensing for mechatronics—A state of the art survey. Mechatronics 1999, 9, 1–20. [Google Scholar] [CrossRef]
  3. Fritzsche, M.; Elkmann, N.; Schulenburg, E. Tactile Sensing: A Key Technology for Safe Physical Human-Robot Interaction. In Proceedings of the 2011 ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 139–140. [Google Scholar] [CrossRef]
  4. Tiwana, M.I.; Redmond, S.J.; Lovell, N.H. A review of tactile sensing technologies with applications in biomedical engineering. Sens. Actuators A Phys. 2012, 179, 17–31. [Google Scholar] [CrossRef]
  5. Lumelsky, V.J.; Shur, M.S.; Wagner, S. Sensitive Skin. IEEE Sens. J. 2001, 1, 41–51. [Google Scholar] [CrossRef]
  6. Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  7. Dijkstra, E.W. A note on two problems in connexion with graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef]
  8. LaValle, S.M. Rapidly-Exploring Random Trees: A New Tool for Path Planning; Technical Report TR 98-11; Computer Science Department, Iowa State University: Ames, IA, USA, 1998. [Google Scholar]
  9. Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar] [CrossRef]
  10. LaValle, S.M.; Kuffner, J.J. Randomized kinodynamic planning. Int. J. Robot. Res. 2001, 20, 378–400. [Google Scholar] [CrossRef]
  11. LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  12. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  13. Floreano, D.; Mattiussi, C. Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies; MIT Press: Cambridge, MA, USA, 2008. [Google Scholar]
  14. Porter, M.M.; Adriaens, D.; Hatton, R.L.; Meyers, M.A.; McKittrick, J. Why the seahorse tail is square. Science 2015, 349, aaa6683. [Google Scholar] [CrossRef] [PubMed]
  15. Schulz, A.K.; Boyle, M.; Boyle, C.; Sordilla, S.; Rincon, C.; Hooper, S.; Aubuchon, C.; Reidenberg, J.S.; Higgins, C.; Hu, D.L. Skin wrinkles and folds enable asymmetric stretch in the elephant trunk. Proc. Natl. Acad. Sci. USA 2022, 119, e2122563119. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, Z.; Freris, N.M.; Wei, X. SpiRobs: Logarithmic Spiral-shaped Robots for Versatile Grasping Across Scales. Sci. Robot. 2025, 3, 100646. [Google Scholar] [CrossRef]
  17. Bretl, T.; Hutchinson, S. Robustness in Robotics: A Survey. IEEE Trans. Autom. Sci. Eng. 2013, 10, 234–242. [Google Scholar]
  18. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots, 2nd ed.; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  19. Siciliano, B.; Khatib, O. (Eds.) Springer Handbook of Robotics; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  20. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Found. Trends-Hum. Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  21. Gonzalez, D.; Perez, J.; Milanés, V.; Nashashibi, F. A review of motion planning techniques for automated vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  22. Elbanhawi, M.; Simic, M. Sampling-based robot motion planning: A review. IEEE Access 2014, 2, 56–77. [Google Scholar] [CrossRef]
  23. Choset, H.; Lynch, K.M.; Hutchinson, S.; Kantor, G.; Burgard, W.; Kavraki, L.E.; Thrun, S. Principles of Robot Motion: Theory, Algorithms, and Implementations; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  24. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33. [Google Scholar] [CrossRef]
  25. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A Solution to the Simultaneous Localization and Map Building (SLAM) Problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef]
  26. Ulmen, J.; Cutkosky, M. A Robust, Low-Cost and Low-Noise Artificial Skin for Human-Friendly Robots. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; pp. 4836–4841. [Google Scholar] [CrossRef]
  27. Strohmayr, M.; Wörn, H.; Hirzinger, G. The DLR Artificial Skin: Uniting Sensitivity and Collision Tolerance. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1012–1018. [Google Scholar] [CrossRef]
  28. O’Neill, J.; Lu, J.; Dockter, R.; Kowalewski, T. Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation. Sensors 2018, 18, 953. [Google Scholar] [CrossRef] [PubMed]
  29. Ohmura, Y.; Kuniyoshi, Y.; Nagakubo, A. Conformable and Scalable Tactile Sensor Skin for Curved Surfaces. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 1348–1353. [Google Scholar] [CrossRef]
  30. Tomo, T.P.; Regoli, M.; Schmitz, A.; Natale, L.; Kristanto, H.; Somlor, S.; Jamone, L.; Metta, G.; Sugano, S. A New Silicone Structure for uSkin—A Soft, Distributed, Digital 3-Axis Skin Sensor and Its Integration on the Humanoid Robot iCub. IEEE Robot. Autom. Lett. 2018, 3, 2584–2591. [Google Scholar] [CrossRef]
  31. Youssefi, S.; Denei, S.; Mastrogiovanni, F.; Cannata, G. Skinware: A Real-Time Middleware for Acquisition of Tactile Data from Large Scale Robotic Skins. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 6421–6426. [Google Scholar] [CrossRef]
  32. Le, T.H.L.; Maslyczyk, A.; Roberge, J.P.; Duchaine, V. A Highly Sensitive Multimodal Capacitive Tactile Sensor. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 407–412. [Google Scholar] [CrossRef]
  33. Hoshi, T.; Shinoda, H. Robot Skin Based on Touch-Area-Sensitive Tactile Element. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3463–3468. [Google Scholar] [CrossRef]
  34. Lee, J.; Cole, M.T.; Lai, J.C.S.; Nathan, A. An Analysis of Electrode Patterns in Capacitive Touch Screen Panels. J. Disp. Technol. 2014, 10, 362–369. [Google Scholar] [CrossRef]
  35. Cannata, G.; Maggiali, M.; Metta, G.; Sandini, G. An Embedded Artificial Skin for Humanoid Robots. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 2008, Seoul, Republic of Korea, 20–22 August 2008; pp. 20–22. [Google Scholar] [CrossRef]
  36. National Institute of Standards and Technology (NIST). ASTM International E54.09. Standard Test Methods for Response Robots: Mobility—Confined Area Terrains (Figure-8 Path); Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2016.
  37. Higgins, J.; Bezzo, N. A Model Predictive-based Motion Planning Method for Safe and Agile Traversal of Unknown and Occluding Environments. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 9092–9098. [Google Scholar] [CrossRef]
  38. Huang, L.; Zang, X.; Gong, Y.; Yuan, B. Hardware Architecture of Graph Neural Network-enabled Motion Planner (Invited Paper). In Proceedings of the 2022 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), San Diego, CA, USA, 30 October–3 November 2022; pp. 1–7. [Google Scholar]
  39. Wu, Y.; Gu, Q.; Yu, J.; Ge, G.; Wang, J.; Liao, Q.; Zhang, C.; Wang, Y. MR-GMMExplore: Multi-Robot Exploration System in Unknown Environments based on Gaussian Mixture Model. In Proceedings of the 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), Jinghong, China, 5–9 December 2022; pp. 1198–1203. [Google Scholar] [CrossRef]
Figure 1. Side view of Eleven on the standardized DHS figure-8 course.
Figure 2. Eleven negotiating the DHS “three-brick” obstacle.
Figure 3. Initial contact with a brick showing FSR and CNT sensors.
Figure 4. Experimental procedure flow used in Section 5.
Figure 5. Rate of advance (proxy) by algorithm.
Figure 6. Commanded speed by lighting condition.
Figure 7. TOST: CPU overhead (example ±2 pp equivalence bounds).
Figure 8. TOST: Wall-time overhead (example ±5% equivalence bounds).
Figure 9. Mean command disagreement (ℓ2) by lighting condition.
Figure 10. Predictive validity of Tactile Traversability (TT) on recorded runs.
Figure 11. Hardware panels for the joint-mounted tentacle (a–d).
Table 1. Representative methods vs. M3 tactile traversal.
Approach | Sensors | Policy | Target | Limitations
DLR skin [27] | Capacitive | Reactive safety | Whole-body contact | Not traversal
uSkin [30] | 3-axis taxels | Contact mapping | Manipulation | Hardware only
SkinWare [31] | Distributed | Middleware | Data acquisition | No navigation
CB-V baseline | Camera | Reactive | Traversal in light | Illumination-dependent
This work (M3) | FSR + CNT | Memory-augmented | Confined traversal | Slower; no equivalence
Table 2. Symbols used in M1–M3 algorithms.
Symbol | Where | Meaning
Θ | M1–M3 | Set of candidate headings (rad)
θ, θ* | M1–M3 | Candidate and selected heading
d_free(θ) | M1–M3 | Estimated free distance along ray θ
τ̂(θ) | M2–M3 | Terrain difficulty proxy ∈ [0, 1]
M_t(x) | M3 | Decaying tactile memory field
ρ | M3 | Memory decay factor per update
λ_τ, λ_m | M2–M3 | Weights on terrain and memory
v | M1–M3 | Forward speed command
Table 3. Command disagreement by algorithm × lighting.
Alg | Light | Count | Mean | Median | Std | Min | Max
M2 | Dark | 30 | 0.191 | 0.167 | 0.074 | 0.107 | 0.336
M3 | Dark | 60 | 0.192 | 0.175 | 0.074 | 0.108 | 0.330
M1 | Dark | 30 | 0.196 | 0.194 | 0.071 | 0.110 | 0.325
M3 | Outdoor | 60 | 0.198 | 0.189 | 0.072 | 0.108 | 0.334
M2 | Indoor | 30 | 0.205 | 0.186 | 0.082 | 0.106 | 0.328
T-VFH | Dark | 30 | 0.205 | 0.185 | 0.081 | 0.106 | 0.332
M1 | Outdoor | 30 | 0.206 | 0.182 | 0.077 | 0.112 | 0.335
T-D* Lite | Outdoor | 30 | 0.208 | 0.202 | 0.069 | 0.112 | 0.331
M2 | Outdoor | 30 | 0.214 | 0.208 | 0.085 | 0.115 | 0.337
CB-V | Dark | 60 | 0.216 | 0.217 | 0.074 | 0.107 | 0.331
M3 | Indoor | 60 | 0.217 | 0.216 | 0.076 | 0.107 | 0.339
T-D* Lite | Dark | 30 | 0.218 | 0.218 | 0.073 | 0.113 | 0.337
CB-V | Outdoor | 30 | 0.218 | 0.205 | 0.079 | 0.111 | 0.334
T-VFH | Indoor | 30 | 0.220 | 0.225 | 0.073 | 0.109 | 0.336
CB-V | Indoor | 30 | 0.225 | 0.222 | 0.077 | 0.109 | 0.334
T-VFH | Outdoor | 30 | 0.231 | 0.232 | 0.089 | 0.112 | 0.333
M1 | Indoor | 30 | 0.233 | 0.245 | 0.071 | 0.108 | 0.334
T-D* Lite | Indoor | 30 | 0.234 | 0.267 | 0.079 | 0.111 | 0.337
Table 4. Command disagreement by algorithm.
Alg | Count | Mean | Median | Std | Min | Max
M3 | 180 | 0.203 | 0.194 | 0.074 | 0.107 | 0.339
M2 | 90 | 0.203 | 0.179 | 0.080 | 0.106 | 0.337
M1 | 90 | 0.212 | 0.211 | 0.074 | 0.108 | 0.335
T-VFH | 90 | 0.219 | 0.213 | 0.081 | 0.106 | 0.336
CB-V | 120 | 0.219 | 0.221 | 0.075 | 0.107 | 0.334
T-D* Lite | 90 | 0.220 | 0.213 | 0.074 | 0.111 | 0.337
Table 5. Command disagreement by lighting.
Light | Count | Mean | Median | Std | Min | Max
Dark | 240 | 0.203 | 0.191 | 0.074 | 0.106 | 0.337
Outdoor | 210 | 0.211 | 0.198 | 0.078 | 0.108 | 0.337
Indoor | 210 | 0.221 | 0.219 | 0.076 | 0.106 | 0.339

Share and Cite

MDPI and ACS Style

Mazurick, A.; Ferworn, A. Traversal by Touch: Tactile-Based Robotic Traversal with Artificial Skin in Complex Environments. Sensors 2025, 25, 6569. https://doi.org/10.3390/s25216569
