Image Segmentation-Guided Visual Tracking on a Bio-Inspired Quadruped Robot
Abstract
1. Introduction
- On the perception side, we introduce a cascaded neural network equipped with a global information guidance module, which effectively integrates low-level texture details and high-level semantic features across layers, overcoming the limitations of single-scale feature extraction. This design enhances segmentation accuracy, particularly in visually cluttered or blurred environments.
- On the control side, high-level visual information is fed into a biologically inspired central pattern generator (CPG) model that generates coordinated limb and spine trajectories, allowing the gait to adapt to dynamically changing conditions. The segmentation results directly drive visual tracking and inform control decisions, closing the loop between perception and motion.
- We conducted extensive evaluations on standard image segmentation datasets and robotic tracking tasks to validate our approach. The results demonstrate that our method outperforms existing approaches in segmentation accuracy and motion flexibility, highlighting its potential for real-time robotic navigation in complex environments.
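The cross-layer fusion idea in the first contribution can be sketched numerically. The helper names (`upsample2x`, `guided_fuse`, `cascaded_decode`) and the sigmoid-of-global-average-pooling guidance vector are our illustrative assumptions; the actual network uses learned convolutional blocks, but the data flow — deepest semantic map progressively fused into shallower texture maps under a global guidance signal — is the same:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def guided_fuse(low, high, g):
    """Fuse a low-level map with the upsampled high-level map, reweighted
    per channel by the global guidance vector g."""
    return g[:, None, None] * (low + upsample2x(high))

def cascaded_decode(features):
    """Coarse-to-fine decoding: start from the deepest (most semantic) map
    and repeatedly fuse it with the next shallower one. The guidance vector
    is sigmoid(global average pool) of the deepest map — a stand-in for a
    learned attention branch."""
    deepest = features[-1]
    g = 1.0 / (1.0 + np.exp(-deepest.mean(axis=(1, 2))))
    out = deepest
    for low in reversed(features[:-1]):
        out = guided_fuse(low, out, g)
    return out
```

Each fusion step doubles spatial resolution, so the final map matches the shallowest (highest-resolution) input — the property that lets low-level texture sharpen high-level semantics.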
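The CPG in the second contribution can likewise be sketched as a network of coupled amplitude-controlled phase oscillators, one per leg — a common abstraction for quadruped gait generation. The gains `alpha` and `k`, the trot phase offsets, and the Euler step size are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def cpg_step(r, theta, phase_offsets, dt=0.005, mu=1.0, alpha=10.0,
             omega=2.0 * np.pi, k=2.0):
    """One Euler step of an amplitude-controlled phase-oscillator network.

    r, theta: per-oscillator amplitude and phase arrays (one per leg).
    phase_offsets: desired phase of each oscillator relative to oscillator 0.
    Returns the updated (r, theta); a joint setpoint is r * cos(theta).
    """
    dr = alpha * (mu - r**2) * r                           # r converges to sqrt(mu)
    dtheta = omega + k * np.sin(theta[0] + phase_offsets - theta)
    return r + dt * dr, theta + dt * dtheta

# Trot gait: diagonal leg pairs in phase, the two pairs half a cycle apart.
offsets = np.array([0.0, np.pi, np.pi, 0.0])
r = np.full(4, 0.2)
theta = np.array([0.0, 0.3, -0.2, 0.1])    # arbitrary initial phases
for _ in range(4000):
    r, theta = cpg_step(r, theta, offsets)
hip_angles = r * np.cos(theta)             # example joint setpoints
```

The coupling term pulls each oscillator toward its target phase relative to oscillator 0, so arbitrary initial phases lock into the commanded gait; switching gaits then amounts to switching the `phase_offsets` vector, which is where high-level visual commands can enter.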
2. Related Work
2.1. Image Segmentation
2.2. Low-Level Gait Controller
3. Method
3.1. Image Segmentation
3.1.1. Cascaded Information Interaction Network
3.1.2. Global Information Guidance Module
3.2. Visual Servo Controller
3.3. CPG-Based Low-Level Gait Control
4. Experimental Results
4.1. Image Segmentation
4.1.1. Experimental Setup
4.1.2. Comparisons to the State of the Art
4.1.3. Speed Analysis
4.2. Low-Level Controller
4.3. Visual Tracking
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wang, T.; Wu, Z.; Wang, D. Visual perception generalization for vision-and-language navigation via meta-learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 5193–5199. [Google Scholar] [CrossRef]
- Zhao, X.; Wang, L.; Zhang, Y.; Han, X.; Deveci, M.; Parmar, M. A review of convolutional neural networks in computer vision. Artif. Intell. Rev. 2024, 57, 99. [Google Scholar] [CrossRef]
- Zhang, Y.; Wen, L.; Hong, L.; Zhang, L.; Guo, Q.; Li, S.; Bing, Z.; Knoll, A. Safety-Critical Control with Saliency Detection for Mobile Robots in Dynamic Multi-Obstacle Environments. In Proceedings of the 2025 IEEE International Conference on Robotics and Automation (ICRA); IEEE: Piscataway, NJ, USA, 2025; pp. 7756–7762. [Google Scholar]
- Liu, Z.; Liu, Y.; Fang, Y.; Guo, X. Autonomous Visual Navigation with Head Stabilization Control for a Salamander-Like Robot. IEEE/ASME Trans. Mechatron. 2025. early access. [Google Scholar]
- Roberts, R.; Ta, D.N.; Straub, J.; Ok, K.; Dellaert, F. Saliency detection and model-based tracking: A two part vision system for small robot navigation in forested environment. In Proceedings of the Unmanned Systems Technology XIV; SPIE: Bellingham, WA, USA, 2012; Volume 8387, pp. 306–317. [Google Scholar]
- Zhang, D.; Han, J.; Zhang, Y.; Xu, D. Synthesizing supervision for learning deep saliency network without human annotation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1755–1769. [Google Scholar] [CrossRef] [PubMed]
- Zhu, G.; Li, J.; Guo, Y. PriorNet: Two Deep Prior Cues for Salient Object Detection. IEEE Trans. Multimed. 2024, 26, 5523–5535. [Google Scholar] [CrossRef]
- Zheng, Q.; Zheng, L.; Deng, J.; Li, Y.; Shang, C.; Shen, Q. Transformer-based hierarchical dynamic decoders for salient object detection. Knowl.-Based Syst. 2023, 282, 111075. [Google Scholar] [CrossRef]
- Zhornyak, L.; Emami, M.R. Gait optimization for quadruped rovers. Robotica 2020, 38, 1263–1287. [Google Scholar] [CrossRef]
- Gangapurwala, S.; Mitchell, A.; Havoutis, I. Guided constrained policy optimization for dynamic quadrupedal robot locomotion. IEEE Robot. Autom. Lett. 2020, 5, 3642–3649. [Google Scholar] [CrossRef]
- Lee, G.; Tai, Y.W.; Kim, J. Deep Saliency with Encoded Low Level Distance Map and High Level Features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar]
- Xu, J.; Liu, Z.A.; Hou, Y.K.; Zhen, X.T.; Shao, L.; Cheng, M.M. Pixel-Level Non-local Image Smoothing With Objective Evaluation. IEEE Trans. Multimed. 2021, 23, 4065–4078. [Google Scholar] [CrossRef]
- Wang, T.; Zhang, L.; Wang, S.; Lu, H.; Yang, G.; Ruan, X.; Borji, A. Detect Globally, Refine Locally: A Novel Approach to Saliency Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2018; pp. 3127–3135. [Google Scholar]
- Wang, W.; Shen, J.; Cheng, M.M.; Shao, L. An Iterative and Cooperative Top-Down and Bottom-Up Inference Network for Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Wu, Z.; Su, L.; Huang, Q. Cascaded Partial Decoder for Fast and Accurate Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Liu, J.J.; Hou, Q.; Cheng, M.M.; Feng, J.; Jiang, J. A Simple Pooling-Based Design for Real-Time Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Zhao, X.; Pang, Y.; Zhang, L.; Lu, H.; Zhang, L. Suppress and Balance: A Simple Gated Network for Salient Object Detection. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020. [Google Scholar]
- Chang, Y.; Liu, Z.; Wu, Y.; Fang, Y. Deep-Learning-Based Automated Morphology Analysis with Atomic Force Microscopy. IEEE Trans. Autom. Sci. Eng. 2024, 21, 7662–7673. [Google Scholar] [CrossRef]
- Liu, N.; Han, J.; Yang, M.H. PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2018; pp. 3089–3098. [Google Scholar]
- Zhang, X.; Wang, T.; Qi, J.; Lu, H.; Wang, G. Progressive Attention Guided Recurrent Network for Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2018; pp. 714–722. [Google Scholar]
- Wang, W.; Zhao, S.; Shen, J.; Hoi, S.C.; Borji, A. Salient Object Detection with Pyramid Attention and Salient Edges. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2022; pp. 10684–10695. [Google Scholar]
- Liu, Z.; Liu, Y.; Fang, Y. Diffusion Model-Based Path Follower for a Salamander-Like Robot. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 14399–14413. [Google Scholar] [CrossRef]
- Zhang, J.; Fan, D.P.; Dai, Y.; Anwar, S.; Saleh, F.; Aliakbarian, S.; Barnes, N. Uncertainty inspired RGB-D saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5761–5779. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Dong, S.; Zhao, X.; Papanastasiou, G.; Zhang, H.; Yang, G. SaliencyGAN: Deep learning semisupervised salient object detection in the fog of IoT. IEEE Trans. Ind. Inform. 2019, 16, 2667–2676. [Google Scholar] [CrossRef]
- Sun, K.; Chen, Z.; Lin, X.; Sun, X.; Liu, H.; Ji, R. Conditional diffusion models for camouflaged and salient object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 2833–2848. [Google Scholar] [CrossRef]
- Yang, Y.; Zhang, T.; Coumans, E.; Tan, J.; Boots, B. Fast and efficient locomotion via learned gait transitions. In Proceedings of the Conference on Robot Learning; PMLR: Cambridge, MA, USA, 2022; pp. 773–783. [Google Scholar]
- Lee, J.; Kim, J.; Ubellacker, W.; Molnar, T.G.; Ames, A.D. Safety-critical Control of Quadrupedal Robots with Rolling Arms for Autonomous Inspection of Complex Environments. arXiv 2023, arXiv:2312.07778. [Google Scholar] [CrossRef]
- Liu, K.; Dong, L.; Tan, X.; Zhang, W.; Zhu, L. Optimization-Based Flocking Control and MPC-Based Gait Synchronization Control for Multiple Quadruped Robots. IEEE Robot. Autom. Lett. 2024, 9, 1929–1936. [Google Scholar] [CrossRef]
- Ding, Y.; Pandala, A.; Li, C.; Shin, Y.H.; Park, H.W. Representation-Free Model Predictive Control for Dynamic Motions in Quadrupeds. IEEE Trans. Robot. 2021, 37, 1154–1171. [Google Scholar] [CrossRef]
- Bjelonic, M.; Grandia, R.; Harley, O.; Galliard, C.; Zimmermann, S.; Hutter, M. Whole-Body MPC and Online Gait Sequence Generation for Wheeled-Legged Robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); IEEE: Piscataway, NJ, USA, 2021; pp. 8388–8395. [Google Scholar] [CrossRef]
- Wang, J.; Hu, C.; Zhu, Y. CPG-based hierarchical locomotion control for modular quadrupedal robots using deep reinforcement learning. IEEE Robot. Autom. Lett. 2021, 6, 7193–7200. [Google Scholar] [CrossRef]
- Sleiman, J.P.; Farshidian, F.; Minniti, M.V.; Hutter, M. A unified mpc framework for whole-body dynamic locomotion and manipulation. IEEE Robot. Autom. Lett. 2021, 6, 4688–4695. [Google Scholar] [CrossRef]
- Tsounis, V.; Alge, M.; Lee, J.; Farshidian, F.; Hutter, M. Deepgait: Planning and control of quadrupedal gaits using deep reinforcement learning. IEEE Robot. Autom. Lett. 2020, 5, 3699–3706. [Google Scholar] [CrossRef]
- Bellegarda, G.; Ijspeert, A. CPG-RL: Learning central pattern generators for quadruped locomotion. IEEE Robot. Autom. Lett. 2022, 7, 12547–12554. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2021; pp. 10012–10022. [Google Scholar]
- Yan, Q.; Xu, L.; Shi, J.; Jia, J. Hierarchical saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2013; pp. 1155–1162. [Google Scholar]
- Li, Y.; Hou, X.; Koch, C.; Rehg, J.M.; Yuille, A.L. The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2014; pp. 280–287. [Google Scholar]
- Yang, C.; Zhang, L.; Lu, H.; Ruan, X.; Yang, M.H. Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2013; pp. 3166–3173. [Google Scholar]
- Li, G.; Yu, Y. Visual saliency based on multiscale deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2015; pp. 5455–5463. [Google Scholar]
- Wang, L.; Lu, H.; Wang, Y.; Feng, M.; Wang, D.; Yin, B.; Ruan, X. Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2017; pp. 136–145. [Google Scholar]
- Wu, R.; Feng, M.; Guan, W.; Wang, D.; Lu, H.; Ding, E. A Mutual Learning Method for Salient Object Detection with Intertwined Multi-Supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. BASNet: Boundary-Aware Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Gao, S.H.; Tan, Y.Q.; Cheng, M.M.; Lu, C.; Chen, Y.; Yan, S. Highly Efficient Salient Object Detection with 100K Parameters. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020. [Google Scholar]
- Pang, Y.; Zhao, X.; Zhang, L.; Lu, H. Multi-Scale Interactive Network for Salient Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2020; pp. 9413–9422. [Google Scholar]
- Zhou, H.; Xie, X.; Lai, J.H.; Chen, Z.; Yang, L. Interactive Two-Stream Decoder for Accurate and Fast Saliency Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2020; pp. 9141–9150. [Google Scholar]
- Liu, N.; Zhang, N.; Wan, K.; Shao, L.; Han, J. Visual Saliency Transformer. In Proceedings of the IEEE International Conference on Computer Vision, October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 4722–4732. [Google Scholar]
- Zhang, M.; Liu, T.; Piao, Y.; Yao, S.; Lu, H. Auto-MSFNet: Search Multi-scale Fusion Network for Salient Object Detection. In Proceedings of the ACM Multimedia Conference; ACM: New York, NY, USA, 2021. [Google Scholar]
- Liu, J.J.; Liu, Z.A.; Peng, P.; Cheng, M.M. Rethinking the U-shape structure for salient object detection. IEEE Trans. Image Process. 2021, 30, 9030–9042. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.J.; Hou, Q.; Liu, Z.A.; Cheng, M.M. Poolnet+: Exploring the potential of pooling for salient object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 887–904. [Google Scholar] [CrossRef] [PubMed]
- Wu, Z.; Su, L.; Huang, Q. Decomposition and Completion Network for Salient Object Detection. IEEE Trans. Image Process. 2021, 30, 6226–6239. [Google Scholar] [CrossRef]
- Yao, Z.; Wang, L. Boundary Information Progressive Guidance Network for Salient Object Detection. IEEE Trans. Multimed. 2022, 24, 4236–4249. [Google Scholar] [CrossRef]
- Ke, Y.Y.; Tsubono, T. Recursive contour-saliency blending network for accurate salient object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision; IEEE: Piscataway, NJ, USA, 2022; pp. 2940–2950. [Google Scholar]
- Liu, Z.A.; Liu, J.J. Towards efficient salient object detection via U-shape architecture search. Knowl.-Based Syst. 2025, 318, 113515. [Google Scholar] [CrossRef]
- Liu, Y.; Cheng, M.M.; Zhang, X.Y.; Nie, G.Y.; Wang, M. DNA: Deeply Supervised Nonlinear Aggregation for Salient Object Detection. IEEE Trans. Cybern. 2022, 52, 6131–6142. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2017; pp. 2117–2125. [Google Scholar]
- Horvat, T.; Melo, K.; Ijspeert, A.J. Spine Controller for a Sprawling Posture Robot. IEEE Robot. Autom. Lett. 2017, 2, 1195–1202. [Google Scholar] [CrossRef]
| Item | Setting |
|---|---|
| Training epochs (rounds) | 60 |
| Batch size | 30 |
| Optimizer | SGD |
| Initial learning rate | 0.005 |
| Learning rate schedule | Fixed (no decay) |
| Momentum | 0.9 |
| Weight decay | |
| Input resize | 384 × 384 (train/test) |
| Normalization | Standard dataset normalization |
| Data augmentation | None |
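As a reading aid, the update rule implied by these settings (SGD with momentum 0.9, fixed learning rate 0.005) can be written out in a few lines. The function name is ours, and `weight_decay` defaults to zero because the table leaves that entry blank:

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.005, momentum=0.9, weight_decay=0.0):
    """One SGD-with-momentum update matching the settings above.

    v is the velocity buffer; weight decay (blank in the table) enters
    as an L2 term on the gradient when nonzero.
    """
    g = grad + weight_decay * w
    v = momentum * v - lr * g
    return w + v, v

# Minimal check on the 1-D quadratic loss f(w) = 0.5 * w**2 (grad = w).
w, v = np.array([1.0]), np.array([0.0])
for _ in range(300):
    w, v = sgd_momentum_step(w, w, v)
```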
| Method | ECSSD Fβ↑ | ECSSD MAE↓ | ECSSD Sm↑ | PASCAL-S Fβ↑ | PASCAL-S MAE↓ | PASCAL-S Sm↑ | DUT-OMRON Fβ↑ | DUT-OMRON MAE↓ | DUT-OMRON Sm↑ | HKU-IS Fβ↑ | HKU-IS MAE↓ | HKU-IS Sm↑ | DUTS-TE Fβ↑ | DUTS-TE MAE↓ | DUTS-TE Sm↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PAGR [20] | 0.927 | 0.061 | 0.889 | 0.847 | 0.089 | 0.822 | 0.771 | 0.071 | 0.775 | 0.919 | 0.047 | 0.889 | 0.854 | 0.055 | 0.839 |
| DGRL [13] | 0.922 | 0.041 | 0.903 | 0.844 | 0.072 | 0.836 | 0.774 | 0.062 | 0.806 | 0.910 | 0.036 | 0.895 | 0.828 | 0.049 | 0.842 |
| PiCANet [19] | 0.935 | 0.047 | 0.917 | 0.864 | 0.075 | 0.854 | 0.820 | 0.064 | 0.830 | 0.920 | 0.044 | 0.904 | 0.863 | 0.050 | 0.868 |
| MLMS [42] | 0.930 | 0.045 | 0.911 | 0.853 | 0.074 | 0.844 | 0.793 | 0.063 | 0.809 | 0.922 | 0.039 | 0.907 | 0.854 | 0.048 | 0.862 |
| PAGE [21] | 0.931 | 0.042 | 0.912 | 0.848 | 0.076 | 0.842 | 0.791 | 0.062 | 0.825 | 0.920 | 0.036 | 0.904 | 0.838 | 0.051 | 0.855 |
| ICTB [14] | 0.938 | 0.041 | 0.918 | 0.855 | 0.071 | 0.850 | 0.811 | 0.060 | 0.837 | 0.925 | 0.037 | 0.909 | 0.855 | 0.043 | 0.865 |
| CPD [15] | 0.939 | 0.037 | 0.918 | 0.859 | 0.071 | 0.848 | 0.796 | 0.056 | 0.825 | 0.925 | 0.034 | 0.907 | 0.865 | 0.043 | 0.869 |
| BASNet [43] | 0.942 | 0.037 | 0.916 | 0.857 | 0.076 | 0.838 | 0.811 | 0.057 | 0.836 | 0.930 | 0.033 | 0.908 | 0.860 | 0.047 | 0.866 |
| PoolNet [16] | 0.944 | 0.039 | 0.921 | 0.865 | 0.075 | 0.850 | 0.830 | 0.055 | 0.836 | 0.934 | 0.032 | 0.917 | 0.886 | 0.040 | 0.883 |
| CSNet [44] | 0.944 | 0.038 | 0.921 | 0.866 | 0.073 | 0.851 | 0.821 | 0.055 | 0.831 | 0.930 | 0.033 | 0.911 | 0.881 | 0.040 | 0.879 |
| GateNet [17] | 0.946 | 0.040 | 0.920 | 0.877 | 0.068 | 0.858 | 0.831 | 0.055 | 0.838 | 0.935 | 0.033 | 0.915 | 0.889 | 0.040 | 0.885 |
| MINet [45] | 0.947 | 0.034 | 0.925 | 0.874 | 0.064 | 0.856 | 0.826 | 0.056 | 0.833 | 0.936 | 0.028 | 0.920 | 0.888 | 0.037 | 0.884 |
| ITSD [46] | 0.947 | 0.035 | 0.925 | 0.871 | 0.066 | 0.859 | 0.823 | 0.061 | 0.840 | 0.933 | 0.031 | 0.916 | 0.883 | 0.041 | 0.885 |
| VST [47] | 0.951 | 0.034 | 0.932 | 0.875 | 0.062 | 0.872 | 0.829 | 0.058 | 0.850 | 0.942 | 0.030 | 0.929 | 0.891 | 0.037 | 0.896 |
| MSFNet [48] | 0.943 | 0.033 | 0.915 | 0.865 | 0.061 | 0.852 | 0.824 | 0.050 | 0.832 | 0.930 | 0.027 | 0.909 | 0.881 | 0.034 | 0.877 |
| CII [49] | 0.950 | 0.034 | 0.926 | 0.882 | 0.062 | 0.865 | 0.831 | 0.054 | 0.839 | 0.939 | 0.029 | 0.920 | 0.890 | 0.036 | 0.888 |
| PoolNet+ [50] | 0.949 | 0.040 | 0.925 | 0.879 | 0.068 | 0.864 | 0.831 | 0.056 | 0.842 | 0.941 | 0.034 | 0.921 | 0.894 | 0.039 | 0.890 |
| DCN [51] | 0.952 | 0.031 | 0.928 | 0.872 | 0.062 | 0.861 | 0.823 | 0.051 | 0.845 | 0.940 | 0.027 | 0.922 | 0.894 | 0.035 | 0.891 |
| DNA [55] | 0.940 | 0.043 | 0.915 | 0.855 | 0.079 | 0.837 | 0.803 | 0.063 | 0.818 | 0.927 | 0.036 | 0.905 | 0.873 | 0.046 | 0.860 |
| RCSB [53] | 0.945 | 0.033 | 0.922 | 0.879 | 0.059 | 0.860 | 0.849 | 0.049 | 0.835 | 0.939 | 0.027 | 0.918 | 0.897 | 0.035 | 0.881 |
| PriorNet [7] | 0.953 | 0.031 | 0.931 | 0.881 | 0.059 | 0.869 | 0.839 | 0.051 | 0.849 | 0.940 | 0.029 | 0.920 | 0.901 | 0.033 | 0.897 |
| NASAL [54] | 0.925 | 0.052 | 0.904 | 0.836 | 0.092 | 0.825 | 0.800 | 0.069 | 0.818 | 0.913 | 0.044 | 0.898 | 0.833 | 0.060 | 0.841 |
| Ours | 0.952 | 0.028 | 0.933 | 0.888 | 0.054 | 0.879 | 0.842 | 0.049 | 0.858 | 0.943 | 0.025 | 0.929 | 0.898 | 0.031 | 0.900 |
| Method | Workstation | Laptop |
|---|---|---|
| FPN [56] | 13.48 ms | 44.15 ms |
| Proposed | 15.22 ms | 45.26 ms |
Share and Cite
Xiao, H.; Ma, G.; Wu, W. Image Segmentation-Guided Visual Tracking on a Bio-Inspired Quadruped Robot. Biomimetics 2026, 11, 234. https://doi.org/10.3390/biomimetics11040234
