Article

TFP-YOLO: Obstacle and Traffic Sign Detection for Assisting Visually Impaired Pedestrians

School of Science, Beijing Information Science and Technology University, Beijing 100192, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5879; https://doi.org/10.3390/s25185879
Submission received: 2 September 2025 / Revised: 16 September 2025 / Accepted: 17 September 2025 / Published: 19 September 2025
(This article belongs to the Section Intelligent Sensors)

Abstract

With the increasing demand for intelligent mobility assistance among the visually impaired, machine guide dogs based on computer vision have emerged as an effective alternative to traditional guide dogs, owing to their flexible deployment and scalability. To enhance their visual perception capabilities in complex urban environments, this paper proposes an improved YOLOv8-based detection algorithm, termed TFP-YOLO, designed to recognize traffic signs such as traffic lights and crosswalks, as well as small obstacle objects including pedestrians and bicycles, thereby improving the target detection performance of machine guide dogs in complex road scenarios. The proposed algorithm incorporates a Triplet Attention mechanism into the backbone network to strengthen the perception of key regions, and integrates a Triple Feature Encoding (TFE) module to achieve collaborative extraction of both local and global features. Additionally, a P2 detection head is introduced to improve the accuracy of small object detection, particularly for traffic lights. Furthermore, the WIoU loss function is adopted to enhance training stability and the model’s generalization capability. Experimental results demonstrate that the proposed algorithm achieves a detection accuracy of 93.9% and a precision of 90.2%, while reducing the number of parameters by 17.2%. These improvements significantly enhance the perception performance of machine guide dogs in identifying traffic information and obstacles, providing strong technical support for subsequent path planning and embedded deployment, and demonstrating considerable practical application value.
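The abstract notes that TFP-YOLO replaces the standard bounding-box regression loss with WIoU to stabilize training. As a rough illustration of the idea (not the authors' implementation), the sketch below computes a WIoU-v1-style loss for a single box pair in plain Python: the ordinary IoU loss is scaled by a distance-based factor derived from the box centers and the smallest enclosing box. Function names and the per-box (rather than batched tensor) interface are illustrative assumptions.

```python
import math

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def wiou_v1(pred, target):
    """WIoU-v1-style loss: IoU loss weighted by a center-distance factor.

    The factor exp(d^2 / (Wg^2 + Hg^2)) grows with the squared distance
    between box centers, normalized by the size of the smallest box
    enclosing both, so poorly localized predictions are penalized more.
    """
    l_iou = 1.0 - iou(pred, target)
    # centers of the predicted and ground-truth boxes
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxt, cyt = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    # width/height of the smallest enclosing box
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r_wiou = math.exp(((cxp - cxt) ** 2 + (cyp - cyt) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * l_iou
```

For a perfectly aligned pair the loss is zero; as the predicted box drifts from the target, both the IoU loss and the exponential weighting grow, which is the dynamic focusing behavior the abstract credits for improved training stability.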
Keywords: computer vision; machine guide dog; object detection; YOLOv8

Share and Cite

MDPI and ACS Style

Zheng, Z.; Cheng, J.; Jin, F. TFP-YOLO: Obstacle and Traffic Sign Detection for Assisting Visually Impaired Pedestrians. Sensors 2025, 25, 5879. https://doi.org/10.3390/s25185879


