Proceeding Paper

The Design of a Mobile Sensing Framework for Road Surfaces Based on Multi-Modal Sensors †

School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
*
Author to whom correspondence should be addressed.
Presented at the 31st International Conference on Geoinformatics, Toronto, ON, Canada, 14–16 August 2024.
Proceedings 2024, 110(1), 21; https://doi.org/10.3390/proceedings2024110021
Published: 11 December 2024
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)

Abstract

Road surface information, encompassing aspects such as road surface damage and facility distribution, is vital for maintaining and updating roads in smart cities. The proposed mobile sensing framework uses multi-modal sensors, including a GPS, gyroscope, accelerometer, camera, and Wi-Fi module, integrated with a Jetson Nano to collect comprehensive road surface information. The collected data are processed, stored, and analyzed on the server side, with results accessible via RESTful APIs. This system enables the detection of road conditions, which are visualized through web mapping techniques. Based on this concept, the Multi-modal Sensor Framework for Road Surface analysis (MSF4RS) is designed, and its use significantly enhances road surface data acquisition and analysis. Key contributions include (1) the integration of multi-modal IoT sensors to capture comprehensive road surface data; (2) the development of a software environment that facilitates robust data processing; and (3) the execution of experiments using the MSF4RS, which synergistically combines hardware and software components. The framework leverages advanced sensor technologies and server-based computational methods and offers a user-friendly web interface for the dynamic visualization and interactive exploration of road surface conditions. Experiments confirm the framework’s effectiveness in capturing and visualizing road surface data, demonstrating significant potential for smart city applications.

1. Introduction

In the current digital age, the systematic acquisition and analysis of road data are paramount to the development of smart cities and intelligent transportation systems. Road networks, as critical infrastructure, not only physically connect diverse urban areas but also facilitate economic and social exchanges, underpinning daily life and commercial activity. However, vehicular traffic inevitably compromises the integrity of these networks, leading to surface damage such as potholes, cracks, and other distresses [1,2]. Furthermore, the deterioration of critical urban infrastructure components, including speed bumps and drainage systems, poses significant risks to vehicle safety [3,4]. Therefore, monitoring bump features on road surfaces (BFRS) is crucial for maintaining a safe and efficient transportation system.
Current road surface information extraction technologies can be categorized into three primary types based on their data acquisition methods and research approaches [5,6,7]. The first category encompasses image-based methods, which rely on images captured by onboard cameras. These methods leverage advanced computer vision and machine learning algorithms to identify textural features within the images, enabling accurate road surface recognition [8,9,10]. For instance, Li et al. employed a Back-Propagation Neural Network (BPNN) to detect cracks [11], while Robet et al. proposed a refined U-net architecture incorporating Atrous Spatial Pyramid Pooling (ASPP) to enhance contextual understanding and extraction accuracy [12]. However, external factors such as weather conditions and road shadows can degrade image quality, negatively impacting extraction results. The second category comprises methods employing specialized equipment such as LiDAR and photogrammetry, which significantly enhance data collection accuracy [13,14]. For instance, Bhatt et al. integrated Terrestrial Laser Scanning (TLS) with a GPS, cameras, and accelerometers to detect road anomalies [15]. Similarly, Sharma et al. utilized ultrasonic sensors and Dynamic Time Warping (DTW)-based image processing to identify road damage [16]. Despite their advantages, the high costs and specialized expertise required for these methods hinder their wider adoption. The third category consists of sensor-based methods, which utilize onboard sensors to capture anomalous vibration signals generated by vehicle–road interactions [17,18,19]. These signals are processed using sophisticated algorithms to extract parameters such as acceleration and are often integrated with machine learning techniques for effective feature recognition. For example, Du et al. employed an advanced Gaussian background model combined with a k-nearest neighbor approach for feature classification [20]. Similarly, Chen et al. transformed sensor data into the frequency domain for analysis, utilizing an energy entropy-based algorithm to detect anomalies such as potholes and bumps [21]. Multisensory road information acquisition techniques have been widely researched and deployed, demonstrably enhancing both the accuracy and reliability of data collection. Integrating diverse data sources, such as GPS, video, and accelerometer data, facilitates comprehensive road information acquisition, thereby supporting the maintenance and development of robust and resilient transportation systems.
To detect BFRS using these multi-modal sensors, this article presents a novel Multi-modal Sensor Framework for Road Surface analysis (MSF4RS), integrating both hardware and software components. The MSF4RS leverages state-of-the-art sensor technologies to optimize road surface data acquisition, incorporates advanced server-based computational methods for robust data processing, and provides a web-based interface for the effective visualization and interactive exploration of road surface information.

2. Methodology

2.1. Overall Design of Mobile Sensing Framework

The MSF4RS is engineered to meet the escalating requirements for automated road data acquisition. As a critical foundation for applications in road surface data collection, the architecture of the MSF4RS is delineated into three principal components: data collection, server processing, and data visualization, as illustrated in Figure 1. This tripartite structure ensures comprehensive management of the entire workflow, from initial data capture through final user interaction, thereby providing a robust infrastructure for transportation systems.

2.2. Sensor Integration and Data Collection

The MSF4RS is equipped with a comprehensive array of multi-modal sensors, including a GPS, IMU, camera, Wi-Fi, and a Jetson Nano processing board. This integrated configuration allows the framework to efficiently acquire and synthesize extensive road surface data from diverse sensors. This robust and cohesive data collection approach is essential for the effective management and analysis of road conditions. Within the MSF4RS, the GPS sensor captures precise location data, formatted as <time, latitude, longitude>. The IMU connects to the Jetson Nano via a USB to TTL converter, with data formatted as <3D axis acceleration, 4-dimensional quaternions>. The camera sensor, employed for capturing two-dimensional road surface images, connects through a USB connector and offers a 120-degree wide-angle field of view, with data formatted as <time, BFRS type>. The Jetson Nano serves as the central edge computing device, interfacing with sensors such as the GPS, IMU, and camera through USB connections. Additionally, a Wi-Fi module facilitates wireless data transmission, with detection results transmitted to the database server using the MQTT protocol.
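As a concrete illustration of the record layouts above, the sketch below models the GPS tuple &lt;time, latitude, longitude&gt; and the IMU tuple &lt;3D axis acceleration, 4-dimensional quaternions&gt; as serializable records ready for MQTT transmission. The class and field names are illustrative, not part of the framework's published code.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GpsReading:
    # <time, latitude, longitude>, as described in the text
    time: float
    latitude: float
    longitude: float

@dataclass
class ImuReading:
    # <3D axis acceleration, 4-dimensional quaternions>
    time: float
    accel: list       # [ax, ay, az] in m/s^2
    quaternion: list  # [w, x, y, z] orientation

def to_mqtt_payload(reading) -> str:
    """Serialize a sensor reading to JSON for publication over MQTT."""
    return json.dumps(asdict(reading))

gps = GpsReading(time=1718000000.0, latitude=32.11, longitude=118.92)
payload = to_mqtt_payload(gps)
```

A JSON payload of this shape can be published to a per-sensor MQTT topic and decoded symmetrically on the server side.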

2.3. Data Transmitting and Server Processing

Sensor data are transmitted in real time to the server using the MQTT protocol and are received by EMQX, a high-performance message broker that serves as a central hub. This broker manages multiple sensor data streams, directing them to the appropriate processing nodes based on predefined topics. Sensor data retrieval from EMQX is facilitated through subscriptions managed by Django Web services, which trigger a sequence of data processing operations. These operations commence with data cleaning, which includes noise removal and the imputation of missing values to preserve data integrity. Following this, data transformation is performed to align the data format with the requirements of storage and subsequent applications. The processed data are then stored in a relational database designed to support complex queries and analyses. Additionally, Django provides a suite of REST APIs that enable web clients to access and utilize the data, thereby supporting flexible data visualization and application integration.
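The cleaning stage can be illustrated with a minimal imputation routine. The paper states only that missing values are imputed; linear interpolation between the nearest valid neighbors is one common choice and is assumed here.

```python
def impute_missing(samples):
    """Fill None gaps in a sensor time series by linear interpolation
    between the nearest valid neighbors; edge gaps copy the nearest
    valid value. A minimal sketch of the cleaning step only."""
    out = list(samples)
    n = len(out)
    for i, v in enumerate(out):
        if v is not None:
            continue
        # nearest valid neighbor on each side of the gap
        left = next((j for j in range(i - 1, -1, -1) if out[j] is not None), None)
        right = next((j for j in range(i + 1, n) if samples[j] is not None), None)
        if left is None and right is None:
            out[i] = 0.0  # series is entirely missing
        elif left is None:
            out[i] = samples[right]
        elif right is None:
            out[i] = out[left]
        else:
            frac = (i - left) / (right - left)
            out[i] = out[left] + frac * (samples[right] - out[left])
    return out
```

In the deployed pipeline this would run per sensor stream before the data-transformation step.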

2.4. Web Visualization and Interaction

The web interface is crafted as a platform for road information management and analysis, integrating map services such as OSM and Google Maps to visualize road information and detection results. Users can interact with the platform through operations such as zooming and dragging to explore road conditions in various locations. The platform facilitates the viewing and management of both raw data and identification results, providing features to filter and export data based on criteria such as time range, geographic location, and data type. To accommodate the varied needs of different usage scenarios, the interface includes capabilities for flexible system parameter configuration. Users are able to adjust settings such as the data collection frequency and model recognition thresholds to meet specific real-world demands. Moreover, Node.js enables the real-time pushing of data, ensuring that the latest road detection results and analytical data are promptly visualized.
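The filter-and-export operations described above can be sketched as a simple server-side filter over detection records. The record fields (`time`, `lat`, `lon`, `type`) are assumed for illustration and are not the framework's actual schema.

```python
def filter_records(records, start, end, bbox=None, data_type=None):
    """Filter detection records by time range, optional bounding box
    (min_lat, min_lon, max_lat, max_lon), and optional data type."""
    selected = []
    for r in records:
        if not (start <= r["time"] <= end):
            continue
        if bbox is not None:
            min_lat, min_lon, max_lat, max_lon = bbox
            if not (min_lat <= r["lat"] <= max_lat
                    and min_lon <= r["lon"] <= max_lon):
                continue
        if data_type is not None and r["type"] != data_type:
            continue
        selected.append(r)
    return selected
```

A REST endpoint backing the web interface could apply the same predicate and serialize the result for map display or export.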

3. Experiment

3.1. Multi-Modal Sensor Configuration

The initial setup involves mounting the GPS antenna on the top of the vehicle to optimize satellite signal reception. The antenna, attached via a magnetic base, transmits data to the onboard GPS sensor board through a USB interface. The IMU sensor (JY901B, manufactured by WIT Motion in Shenzhen, China) is connected to the Jetson Nano using a USB to TTL converter, which facilitates synchronized data collection from the various sensors. A USB camera, positioned on the vehicle’s rear-view mirror, captures images of the road surface along the driving path. Furthermore, to ensure a stable network connection, a Wi-Fi module is integrated into the Jetson Nano. The vehicle’s power supply provides ample electrical support for the entire data collection framework. The sampling rates are established as follows: 1 Hz for the GPS, 100 Hz for the IMU, and 30 frames per second (FPS) for the camera. The configuration and integration of these multi-modal sensors are depicted in Figure 2.
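Because the sensors run at different rates (1 Hz GPS, 100 Hz IMU, 30 FPS camera), downstream fusion requires aligning events by timestamp. A minimal nearest-timestamp matcher, assuming each stream's timestamps are sorted ascending, might look like this; it is a sketch, not the framework's synchronization code.

```python
import bisect

def nearest_index(timestamps, t):
    """Return the index of the timestamp closest to t
    (timestamps must be sorted in ascending order)."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # choose whichever neighbor is closer in time
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Pair each 1 Hz GPS fix with the closest 100 Hz IMU sample
gps_t = [0.0, 1.0, 2.0]
imu_t = [k * 0.01 for k in range(300)]  # 100 Hz over 3 s
pairs = [(t, nearest_index(imu_t, t)) for t in gps_t]
```

The same matcher can pair camera frames (30 FPS) with GPS fixes so that detections inherit a location.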

3.2. Road Surface Information Detection

To detect BFRS using multi-modal sensor data, the YOLO v5 model is utilized to analyze video images and identify uneven areas, as depicted in Figure 3a. This model, which processes images through convolutional networks, excels at detecting and outlining irregular sections on road surfaces. For the analysis of acceleration data, an LSTM model is employed. The data first undergo a pose space transformation to align with a reference frame perpendicular to the road surface, effectively mitigating disturbances from changes in vehicle posture. The transformed acceleration data (as depicted in Figure 3b) are then input into an LSTM model equipped with 100 hidden units. This setup captures the temporal dynamics and corresponding BFRS features, thus enhancing the accuracy of BFRS detection.
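The pose space transformation can be sketched as rotating each body-frame acceleration vector by the IMU's 4-dimensional quaternion into a road-aligned frame. The paper does not give its exact formulation, so the standard quaternion rotation identity is assumed below.

```python
def rotate_by_quaternion(v, q):
    """Rotate vector v = (vx, vy, vz) by unit quaternion q = (w, x, y, z),
    computing v' = q v q* via the expanded identity
    v' = v + w*t + q_vec x t, where t = 2 * (q_vec x v)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * (q_vec x v)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    # v' = v + w*t + (q_vec x t)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )
```

Applying this per sample removes the component of measured acceleration caused by vehicle tilt, leaving the road-normal signal for the LSTM.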

3.3. Interaction and Visualization

The BFRS detection process within the MSF4RS is implemented as a Web Processing Service (WPS), which is accessible and manipulable via various web requests. The configuration of detection parameters and data retrieval is facilitated through GET and POST requests. Through these requests, parameters essential for training the YOLO or LSTM models, such as the execution environment, initial learning rate, maximum number of epochs, and minimum batch size, can be accessed and modified. This interactive parameter configuration is executed through the RESTful APIs, operable directly within a web browser. Adjustments made to the settings are transmitted to the data server for execution, and the corresponding APIs are invoked to initiate the BFRS detection service. For example, users can specify parameters such as the model, sensor type, and dataset in the URL to activate the detection service via the RESTful APIs. Furthermore, BFRS are visualized through the web interface. Leaflet.js is employed to manage various layers, including base maps, detection results, regions of interest (ROIs), and annotations. These layers are interactively controllable and can be visualized based on selected filter values, offering a dynamic visualization of road surface information, as illustrated in Figure 4.
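A sketch of such a URL-driven invocation follows. The endpoint path `/api/bfrs/detect` and the parameter names are hypothetical, since the paper does not publish the actual API schema; only the pattern of encoding model, sensor, dataset, and training parameters in the query string is taken from the text.

```python
from urllib.parse import urlencode

def build_detection_url(base, model, sensor, dataset, **params):
    """Compose a query URL for a BFRS detection service.
    Path and parameter names are illustrative assumptions."""
    query = {"model": model, "sensor": sensor, "dataset": dataset, **params}
    return f"{base}/api/bfrs/detect?{urlencode(query)}"

url = build_detection_url(
    "http://example.org", model="lstm", sensor="imu",
    dataset="road_2024", max_epochs=50, learning_rate=0.001,
)
```

Opening such a URL in a browser (a GET request) would trigger the detection service, matching the interaction style described above.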
The experiments demonstrate that the MSF4RS harnesses multi-modal sensors to gather road surface information, achieving robust data acquisition performance on the mobile platform. Data are transmitted to the server via the MQTT protocol, enabling efficient BFRS detection. Beyond data collection and transmission, the framework supports the user-driven configuration of parameters and data visualization through a dedicated web browser interface. This functionality substantially improves the operational convenience and usability of the framework.

4. Conclusions

The collection and analysis of road information are pivotal in the development of smart cities and intelligent transportation systems. This paper introduces the MSF4RS, a framework designed for the comprehensive acquisition and analysis of road surface information. The framework integrates GPS, IMU, camera, and Wi-Fi modules to ensure thorough data collection and transmission. Utilizing models such as YOLO v5 and LSTM for data analysis enables precise identification of BFRS. Furthermore, the MSF4RS employs a robust server and an intuitive web interface, enhancing user engagement through RESTful APIs with convenient data management, parameter configuration, and visualization functionalities. Ultimately, the MSF4RS presents a novel approach and methodology for the automated and intelligent perception of road information and is poised to significantly enhance road management and promote a safer, more convenient transportation system.

Author Contributions

H.L.: performed the theory analysis and methodology and contributed to drafting the manuscript. Y.H.: analyzed the data, design, and coding. J.H.: collected the data and conducted the statistics. W.L.: performed the literature reviews, provided the background knowledge, and improved the writing. T.W., H.Z. and W.M.: conducted the data visualization and related coding. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 42101466.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kassas, Z.Z.M.; Maaref, M.; Morales, J.J.; Khalife, J.J.; Shamei, K. Robust vehicular localization and map matching in urban environments through IMU, GNSS, and cellular signals. IEEE Intell. Transp. Syst. Mag. 2020, 12, 36–52.
  2. Azhar, K.; Murtaza, F.; Yousaf, M.H.; Habib, H.A. Computer vision based detection and localization of potholes in asphalt pavement images. In Proceedings of the 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Vancouver, BC, Canada, 15–18 May 2016; IEEE: New York, NY, USA, 2016; pp. 1–5.
  3. Kulambayev, B.; Beissenova, G.; Katayev, N.; Abduraimova, B.; Zhaidakbayeva, L.; Sarbassova, A.; Akhmetova, O.; Kozbakova, A.; Yessenbayeva, A. A Deep Learning-Based Approach for Road Surface Damage Detection. Comput. Mater. Contin. 2022, 73, 2.
  4. Ren, M.; Zhang, X.; Chen, X.; Zhou, B.; Feng, Z. YOLOv5s-M: A deep learning network model for road pavement damage detection from urban street-view imagery. Int. J. Appl. Earth Obs. Geoinf. 2023, 120, 103335.
  5. Shtayat, A.; Moridpour, S.; Best, B.; Shroff, A.; Raol, D. A review of monitoring systems of pavement condition in paved and unpaved roads. J. Traffic Transp. Eng. 2020, 7, 629–638.
  6. Kim, T.; Ryu, S.-K. Review and analysis of pothole detection methods. J. Emerg. Trends Comput. Inf. Sci. 2014, 5, 603–608.
  7. Kashinath, S.A.; Mostafa, S.A.; Mustapha, A.; Mahdin, H.; Lim, D.; Mahmoud, M.A.; Mohammed, M.A.; Al-Rimy, B.A.S.; Md Fudzee, M.F.; Yang, T.J. Review of data fusion methods for real-time and multi-sensor traffic flow analysis. IEEE Access 2021, 9, 51258–51276.
  8. Lu, K. Advances in deep learning methods for pavement surface crack detection and identification with visible light visual images. arXiv 2020, arXiv:2012.14704.
  9. Dong, H.; Song, K.; Wang, Q.; Yan, Y.; Jiang, P. Deep metric learning-based for multi-target few-shot pavement distress classification. IEEE Trans. Ind. Inform. 2021, 18, 1801–1810.
  10. Guo, W. Intelligent detection device of pavement disease based on image recognition technology. J. Phys. Conf. Ser. 2021, 1884, 012032.
  11. Li, L.; Sun, L.; Ning, G.; Tan, S. Automatic pavement crack recognition based on BP neural network. PROMET-Traffic Transp. 2014, 26, 11–22.
  12. Robet, R.; Hasibuan, Z.A.; Soeleman, M.A.; Purwanto, P.; Andono, P.N.; Pujiono, P. Deep Learning Model In Road Surface Condition Monitoring. In Proceedings of the 2022 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 17–18 September 2022; IEEE: New York, NY, USA, 2022; pp. 204–209.
  13. Jo, Y.; Ryu, S. Pothole detection system using a black-box camera. Sensors 2015, 15, 29316–29331.
  14. Siegemund, J.; Franke, U.; Förstner, W. A temporal filter approach for detection and reconstruction of curbs and road surfaces based on conditional random fields. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; IEEE: New York, NY, USA, 2011; pp. 637–642.
  15. Bhatt, A.; Bharadwaj, S.; Sharma, V.B.; Dubey, R.; Biswas, S. An Overview of Road Health Monitoring System for Rigid Pavement by Terrestrial Laser Scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 173–180.
  16. Sharma, S.K.; Phan, H.; Lee, J. An application study on road surface monitoring using DTW based image processing and ultrasonic sensors. Appl. Sci. 2020, 10, 4490.
  17. Mirtabar, Z.; Golroo, A.; Mahmoudzadeh, A.; Barazandeh, F. Development of a crowdsourcing-based system for computing the international roughness index. Int. J. Pavement Eng. 2022, 23, 489–498.
  18. Laubis, K.; Konstantinov, M.; Simko, V.; Gröschel, A.; Weinhardt, C. Enabling crowdsensing-based road condition monitoring service by intermediary. Electron. Mark. 2019, 29, 125–140.
  19. Ng, J.R.; Wong, J.S.; Goh, V.T.; Yap, W.J.; Yap, T.T.V.; Ng, H. Identification of road surface conditions using IoT sensors and machine learning. In Proceedings of the Computational Science and Technology: 5th ICCST 2018, Kota Kinabalu, Malaysia, 29–30 August 2018; Springer: Singapore, 2019; pp. 259–268.
  20. Du, R.; Qiu, G.; Gao, K.; Hu, L.; Liu, L. Abnormal road surface recognition based on smartphone acceleration sensor. Sensors 2020, 20, 451.
  21. Chen, K.; Tan, G.; Lu, M.; Wu, J. CRSM: A practical crowdsourcing-based road surface monitoring system. Wirel. Netw. 2016, 22, 765–779.
Figure 1. The overall design of the framework.
Figure 2. Sensor configuration in the vehicle: (a) Jetson Nano with Wi-Fi module; (b) GPS; (c) IMU; (d) camera.
Figure 3. Data processing for different sensors: (a) detection result of camera; (b) spatially transformed acceleration.
Figure 4. Interaction and visualization of BFRS detection results: (a) interactive layer controls; (b) dynamic visualization based on selected filter values (labeled by the blue bubble with a white circle). The non-English annotations in the base map are automatically generated by the map data provider based on the geographic location; they do not affect the interaction and visualization of the web page.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Lyu, H.; Huang, Y.; Hua, J.; Li, W.; Wu, T.; Zhang, H.; Ma, W. The Design of a Mobile Sensing Framework for Road Surfaces Based on Multi-Modal Sensors. Proceedings 2024, 110, 21. https://doi.org/10.3390/proceedings2024110021

