Article

Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System

1 Digital Fujian Institute of Big Data for Natural Disaster Monitoring, Xiamen University of Technology, Xiamen 361024, China
2 School of Environmental Science and Engineering, Xiamen University of Technology, Xiamen 361024, China
3 Hunan Key Laboratory of Remote Sensing Monitoring of Ecological Environment in Dongting Lake Area, Changsha 410004, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983
Submission received: 26 March 2025 / Revised: 24 April 2025 / Accepted: 26 April 2025 / Published: 30 April 2025

Abstract

With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in a MySQL database. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning.

1. Introduction

Land subsidence, the gradual sinking or settling of the ground, poses significant risks to infrastructure and public safety. According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), the global area affected by land subsidence is projected to increase by 8% by 2040, with the most severely impacted regions concentrated in Asia. Monitoring surface deformation is therefore particularly important for identifying potential hazards and implementing effective preventive measures [1,2,3].
Conventional deformation monitoring methods, such as precision leveling and total station measurement, face limitations in precision, susceptibility to weather conditions, and restricted measuring range. The Global Navigation Satellite System (GNSS) provides millimeter-level [4,5], all-weather, real-time monitoring with high stability and reliability. However, its positioning performance in complex scenarios is limited by two issues: meteorologically dependent propagation delays and environmentally sensitive signal aberrations [6,7]. With the advancement of Internet of Things (IoT) technology, its integration with GNSS has emerged as a new way of monitoring various deformations. A GNSS deformation monitoring system combining 4G data transmission and automated data processing was applied to the South-to-North Water Diversion Project [8]. It enhanced long-term monitoring efficiency with high-frequency data feedback and cloud processing, but it relies on stable communication networks, risking data interruptions in remote areas. Shi [9] implemented deformation monitoring and hazard warning for bridges on a high-speed roadway using a laser displacement sensor and IoT technology. Although effective for bridge health diagnosis with high-precision lasers and real-time IoT feedback, this approach is limited by high sensor deployment costs. Vision-based monitoring faces its own constraints: target recognition accuracy decreases significantly in low light, at high dust concentrations [10], or in rainy and foggy weather, and Hudda’s [11] lightweight traffic detection system likewise struggles to maintain stable performance at night or in bad weather. Ma [12] integrated the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) to conduct deformation observations of long-span railway bridges by fusing multi-source heterogeneous data. This dual-system approach enhances monitoring reliability through redundancy. However, the calibration mechanism for resolving data conflicts among multiple systems has not been fully discussed, which may limit its applicability in complex environments.
This study integrates meteorological, visual, and IoT information and comprehensively analyzes and corrects meteorological interference based on multi-source data, thereby improving monitoring accuracy and reliability. In addition, it combines visual information to present deformation conditions in real time, expanding the monitoring dimensions and coverage while enabling dynamic monitoring and early warning. The system improves monitoring reliability in areas with weak GNSS signal coverage [13,14,15].
We combine IoT multi-source data fusion and 3D visualization to establish a framework for deformation analysis. With its stereoscopic, realistic, and object-level features [16], this framework enables the seamless integration of geospatial data, interior and exterior building structure details, topography, and related attribute information [17]. By regularly updating the 3D data, deformation trends can be identified across various time scales. Furthermore, component-level 3D real-scene (3DRS) technology can be effectively applied in location services, virtual reality, and intelligent monitoring [18].
This study constructs an integrated surface deformation monitoring platform that combines GNSS, IoT-based environmental sensors, 3DRS technology, and intelligent vision technologies to deliver high-precision, multi-dimensional observations. The platform addresses the limitations of existing monitoring systems, including single-source data, poor nighttime monitoring quality, and fragmented heterogeneous images [19,20,21]. It includes a 3DRS sub-platform, the integration of GNSS and video monitoring data, and a nighttime supplemental lighting system, enabling the accurate identification of deformation areas, including surface cracks, depressions, and elevation changes. By fusing GNSS monitoring data, intelligent vision observations, and IoT data, the platform facilitates the multi-dimensional analysis of surface deformation causes.

2. Deformation Monitoring System Architecture

The system adopts a three-layer architecture of “edge collection–cloud fusion–terminal interaction” (Figure 1) to facilitate the collection, transmission, and visualization of GNSS data, meteorological data, and video data [22,23].
The edge collection layer performs multimodal data acquisition. The GNSS terminal outputs latitude, longitude, elevation, and UTC time via 4G wireless transmission. The visual sensor pushes an H.264-encoded video stream through the RTMP protocol. The weather station reports temperature, humidity, wind speed, wind direction, rainfall, and barometric pressure through the MQTT protocol (JSON format, 5 s interval). A UAV with an oblique camera system collects multi-view images, from which a high-precision 3D mesh model is generated with ContextCapture 10.20 [24,25]. The Trimble SX10 imaging station acquires high-density point cloud data with millimeter-level accuracy, stored in LAS format to facilitate integration with the UAV photogrammetry models [26].
The cloud fusion layer relies on Tencent Cloud servers for data integration, classification, and storage. It receives RTMP video streams from the visual sensors through the lightweight streaming media server ZLMediaKit, which supports multiple concurrent transcodings and HTTP-FLV distribution. Meanwhile, an Nginx static resource server is deployed to host 3D geographic information data in 3D Tiles format [27,28]. GNSS data are structured and stored with fields such as device ID, latitude, longitude, elevation, and UTC timestamp. Meteorological data are indexed in tables by dimensions such as sensor node ID, temperature, humidity, and wind speed. Real-time on/off control of the power supply is achieved through the WebSocket and TCP servers.
The terminal interaction layer integrates real-time video streaming, a 3D spatial model, sensing data, and IoT control functions. Using the CesiumJS 3D engine, it loads 3D Tiles generated from the cloud-hosted oblique photography models, providing functionalities such as 3D scene browsing, calculation, spatial analysis, and dual-screen comparison. RTMP video streams (encapsulated as HTTP-FLV by ZLMediaKit) are dynamically superimposed on the corresponding positions in the 3D scene through coordinate mapping, forming a dual-view fusion display of “live-view model + real-time monitoring” [29,30]. Users can trigger remote light control commands through interface buttons, driving relays to activate fill lights. Real-time GNSS data are displayed as charts, and meteorological information is displayed as charts or wind roses. The layer also includes warning alerts, report downloads, and historical data queries, forming a closed-loop “monitoring–warning–control–retrospection” management system.

3. Data Sources

The input data of this framework can be grouped into multi-source sensing data and 3D data.

3.1. Multi-Source Sensing Data

The multi-source sensing data encompass meteorological data (temperature, relative humidity, wind speed, wind direction, rainfall, and atmospheric pressure), deformation information from the GNSS terminal, and visual sensor data. The visual sensor data include camera monitoring video, monitoring video from the regular cruising of unmanned aerial vehicles (UAVs), and inspection video from intelligent inspection vehicles. Meteorological sensors, a GNSS receiver, and cameras are integrated into the GNSS monitoring station to ensure comprehensive data collection. These data are transmitted in real time over 4G wireless technology, which provides low-latency, high-volume transmission. Figure 2 shows the multi-sensor data sources of the deformation monitoring system.
The weather station uses a YL-QX86 meteorological instrument to monitor six environmental parameters: temperature, relative humidity, wind speed, wind direction, rainfall, and atmospheric pressure. Table 1 shows the measurement parameters of the meteorological elements.
The GNSS data were collected by the ComNav A300 universal GNSS receiver, designed specifically for geological disaster monitoring applications. Featuring a low-power design, the device automatically adjusts its operational modes based on built-in MEMS sensors and the real-time positional changes of monitoring points. It utilizes 4G wireless communication and meets IP68 waterproof and dustproof standards, ensuring reliable functionality in extreme outdoor environments. Its positioning accuracy is specified as ±(8 + 1 × 10⁻⁶ × D) mm horizontally and ±(15 + 1 × 10⁻⁶ × D) mm vertically, where D represents the baseline length (the 1 × 10⁻⁶ × D term corresponds to 1 mm per kilometer of baseline).
The cameras used in this study integrate a Sony 335 1/2.8″ CMOS sensor and a GuoKe G7205V300 processing chip with an SoC processor. This combination has very low and stable power consumption, making it suitable for long-term monitoring applications. The system supports the H.265 compression algorithm, ensuring efficient data transmission, and adheres to the standard ONVIF protocol and RTMP streaming, facilitating seamless integration with existing infrastructure. The maximum resolution is 5 MP (2560 × 1920). Additionally, it supports both timed and event-triggered video storage.
Aerial video data were collected by a DJI Mavic 3E drone. Before oblique photography, the aerial photography area was surveyed, taking into account factors such as wind direction and weather conditions. The aerial photography was conducted between 8:00 and 10:00 a.m. at a flight altitude of 200 m. The oblique data were collected along a “well-shaped” flight route (a grid pattern resembling the Chinese character 井). The drone’s aerial video has a resolution of 4K at a frame rate of 30 frames per second (fps). Real-time images were transmitted to the ground station through a high-definition camera and wireless transmission system, enabling the capture of both the overall and detailed features around deformation areas.
The inspection video was captured using a DJI RoboMaster S1 equipped with a high-definition camera (maximum image size of 1920 × 1080) and a gimbal capable of fast rotation for the 360-degree monitoring of both indoor and outdoor deformation areas. The videos were transmitted to a remote server for storage and analysis, supporting edge detection.

3.2. Three-Dimensional Data

The outdoor 3D data were acquired through oblique photography during regular UAV aerial surveys. In this study, a DJI M300 RTK UAV equipped with a Zenmuse P1 sensor was used to acquire the oblique photography data of the study area along the “well-shaped” flight route. The acquired data, including multi-view images, feature spatial data, and texture details, were then used to create multi-temporal 3D models in ContextCapture.
The indoor 3D model was constructed from point cloud data, such as those of the indoor space and monitoring equipment, acquired by the Trimble SX10 high-precision laser scanner [31]. After investigating the layout and structure of the sites with potential deformation, laser point cloud data were collected at 30 sites. We selected the indoor mode and a near-field distance range of <10 m. The resolution was set to 11 MPts, with a vertical range of −60° to 90° and a horizontal range of 0° to 360°. The data were imported into Trimble RealWorks for denoising and segmentation, and SketchUp was used to construct the indoor 3DRS model.

3.3. Sensor Maintenance Plan

To ensure long-term reliability and data accuracy, a structured maintenance plan is implemented for key components of the monitoring system, focusing on operational cycles and technical specifications:
The meteorological instrument (YL-QX86) undergoes daily self-checks to verify parameter consistency (e.g., temperature, humidity) and flag anomalies. Monthly cleaning of probes and connectors mitigates debris interference. Quarterly calibration against reference stations ensures measurement accuracy (e.g., temperature within ±0.5 °C).
For GNSS deformation monitoring, weekly antenna inspections of the ComNav A300 receiver maintain a clear sky view above a 15° elevation mask and secure mounting. Monthly signal analysis tracks the satellite count (≥6) and PDOP (≤6) to identify signal degradation. Annual firmware updates optimize the positioning algorithms, maintaining horizontal and vertical accuracies of ±(8 + 1 × 10⁻⁶ × D) mm and ±(15 + 1 × 10⁻⁶ × D) mm, respectively.
Visual cameras (Sony 335) require biweekly lens cleaning to preserve image clarity. Monthly test-chart calibration corrects color distortion. Quarterly power checks ensure stable low-power operation (≤15 W in use) to avoid interruptions.
The DJI Mavic 3E UAV undergoes pre-flight checks for propeller integrity, gimbal stability, and signal transmission (4K, 30 fps, latency ≤ 200 ms), and post-flight data backups secure the aerial imagery. Monthly battery maintenance replaces units whose capacity has fallen below 80%, and firmware updates ensure compatibility with the “well-shaped” oblique photography route.
The DJI RoboMaster S1 likewise undergoes pre-run camera/gimbal tests and post-run data storage for remote analysis. Monthly inspections check gimbal smoothness, image quality, and battery health (replacement below 80% capacity). Periodic software (V1.1.1) updates enhance edge detection for deformation analysis.
A centralized log tracks all maintenance activities (dates, tasks, component status) for traceability. This plan ensures compliance with specifications, minimizes downtime, and enhances system reliability. By synchronizing maintenance with technical requirements, it addresses environmental challenges (e.g., dust, signal interference) and equipment wear, safeguarding accuracy and operational integrity in critical infrastructure monitoring.

4. Data Processing

Data processing includes video data processing, streaming media storage, remote light control, 3DRS base construction, and GNSS and meteorological data processing, as shown in Figure 3.

4.1. Video Data Processing

4.1.1. Augmented Reality

The system integrates the streaming framework ZLMediaKit [32], deployed on a cloud server with HTTP-FLV protocol support. This enables the low-latency transmission and cross-platform visualization of visual sensor data. Sensors stream FFmpeg-encoded (H.264/AAC) audio/video content via RTMP to the cloud, while the web platform uses flv.js and Media Source Extensions (MSE) for FLV parsing and plugin-free playback in the browser. When a fixed-point visual sensor streams data to the server, it generates an FLV stream. The FLV stream URL is composed of the server’s IP address followed by a stream-specific path such as “/flv/[stream_name]”, where “[stream_name]” is set by the stream pusher to differentiate streams from different origins.
The system creates a Video2D object by accessing the stream address of the visual sensor, assigning geographic coordinates (latitude and longitude) and configuring style and container attributes for rendering. Using camera parameters (angle, heading, and pitch), the system adjusts viewing perspectives and distances.
In this study, a building at Xiamen University of Technology served as the experimental site. Figure 4 demonstrates the 3DRS augmented reality effect through the integration of live monitoring video and a 3D model of the deformed region.

4.1.2. Image-Based Rendering

The integration of 3DRS refinement models with UAV aerial video faces many challenges, such as scale matching and lighting consistency [33]. We address these problems by creating a virtual camera in Blender 4.0. Its position and motion trajectory were designed to follow a predefined path, ensuring close alignment with the actual camera’s movement.
First, the UAV’s cruise video is imported into the work area using Blender’s motion-tracking function. The video frame rate is adjusted to match the original recording, and the scene frames are set so that the video length corresponds to the total number of frames. A pre-read strategy is used to load the video into memory, and the tracking motion model is set to “position + rotation + zoom”.
At least eight feature points are identified in the dynamic scene. A prominent building number is selected as the first tracking point and tracked forward using the motion-tracking algorithm. If tracking fails in subsequent frames, the algorithm reverts to the previous frame, expands the search area, and resumes tracking. This process is repeated until all eight tracking points are obtained.
Next, the UAV camera parameters are input into the software, and keyframes A and B are set to the first and 369th frames of the video, respectively, so that the camera motion in 3D space can be solved by camera-solving techniques. Subsequently, the tracking scene is set up, applying the solved motion trajectory to the 3D viewport. The 3D model (in .obj format) is imported, and motion tracking is enabled to visualize the eight feature points in 3D space. Three feature points on the 3D model’s ground are selected and aligned with the ground in the video, and the positional and rotational coordinates of the detailed 3D model are adjusted until it fits the road in the video precisely. Figure 5 shows an example of image-based rendering processed by this method.
Finally, the output file format is set to FFmpeg, and the transparency settings are configured. During rendering, all background layers except the 3D model are hidden. The render animation is then executed to complete the image-based rendering synthesis and produce the final output. The resulting video integrates UAV emergency rescue and regular aerial footage with image-based renderings of the 3D models, providing a more realistic visual effect [34]. This process enables the long-distance deformation monitoring of hidden danger points and provides valuable decision-making assistance.

4.1.3. Edge Detection

This study employs edge detection for crack extraction [35,36]. Initially, the image undergoes grayscale conversion, contrast enhancement, and denoising. Subsequently, it is segmented with a dynamic thresholding algorithm, effectively separating the crack foreground from the background [37]. The largest connected region is then selected, and any breaks within it are repaired. Merged bounding boxes are then used to calculate the aspect ratio, as illustrated in Figure 6. Finally, row and column projections are analyzed. A detailed description of the process follows.
The base image, composed of red, green, and blue channels (Figure 6a), is converted to grayscale using an averaging method. Gray values are then redistributed: the pixels are traversed to count the frequency of each gray level. For an image with $L$ gray levels, the frequency of gray level $r_k$ in the original image is denoted as $n_k$, $k = 0, 1, 2, \ldots, L-1$. The cumulative distribution function $S_k$ is calculated for each gray level:
$$S_k = \sum_{j=0}^{k} P_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n} \qquad (1)$$
where $n$ is the total number of pixels in the image and $P_r(r_j)$ is the probability of gray level $r_j$.
The cumulative distribution function $S_k$ is multiplied by $(L-1)$ to obtain the histogram-equalized gray value $S_{out} = (L-1)S_k$. Each pixel’s gray value in the original image is then replaced with its corresponding equalized gray value, resulting in a histogram-equalized image (Figure 6b).
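This equalization step reduces to a lookup table built from $S_k$. The following is a minimal NumPy sketch for an 8-bit grayscale image ($L = 256$); the function and variable names are illustrative, not taken from the authors’ implementation.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization via the cumulative distribution S_k (Eq. (1))."""
    # n_k: frequency of each gray level r_k; gray.size: total pixel count n
    hist = np.bincount(gray.ravel(), minlength=levels)
    s = np.cumsum(hist) / gray.size                     # S_k = sum_{j<=k} n_j / n
    lut = np.round((levels - 1) * s).astype(np.uint8)   # S_out = (L - 1) * S_k
    return lut[gray]  # replace each pixel with its equalized gray value
```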
Median filtering with a 3 × 3 neighborhood is used to remove impulse noise, such as salt-and-pepper noise, from the image. The denoised image is then normalized, and its brightness and contrast are adjusted using gamma correction (Figure 6c). An iterative thresholding segmentation algorithm is subsequently employed to determine a suitable threshold; the iteration count depends on the characteristics of the crack image.
A suitable global threshold $T_0$ is selected as the initial value. Based on the threshold discrimination rule, the image is segmented into two subregions: $G_1$, consisting of pixels with gray values greater than $T_0$, and $G_2$, consisting of pixels with gray values less than $T_0$. The average gray values of the pixels in $G_1$ and $G_2$ are calculated and denoted as $M_1$ and $M_2$, respectively, and the average of $M_1$ and $M_2$ is taken as the new threshold $T_1$:
$$T_1 = \frac{1}{2}(M_1 + M_2) \qquad (2)$$
If $|T_1 - T_0| < 0.5$, the threshold $T_1$ is output as the final threshold. Otherwise, $T_0$ is assigned the value of $T_1$ and the above steps are repeated. Through iterative threshold adjustment, the algorithm achieves a more accurate segmentation of the image into foreground and background (Figure 6d).
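A minimal sketch of this iterative threshold selection, assuming a grayscale NumPy image; the stopping tolerance of 0.5 follows the rule above, and the initial threshold defaults to the global mean (an assumption, since the text only requires a “suitable” $T_0$).

```python
import numpy as np

def iterative_threshold(gray, t0=None, eps=0.5):
    """Iterative global threshold selection (Equation (2))."""
    t = float(gray.mean()) if t0 is None else float(t0)  # initial T0
    while True:
        g1 = gray[gray > t]    # subregion G1: gray values above the threshold
        g2 = gray[gray <= t]   # subregion G2: remaining pixels
        m1 = g1.mean() if g1.size else t
        m2 = g2.mean() if g2.size else t
        t_new = 0.5 * (m1 + m2)        # T1 = (M1 + M2) / 2
        if abs(t_new - t) < eps:       # |T1 - T0| < 0.5: converged
            return t_new
        t = t_new                      # otherwise assign T1 to T0 and repeat

# binary = (gray > iterative_threshold(gray)).astype(np.uint8) * 255
```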
After segmentation, smaller connected regions are filtered out by labeling the regions, calculating their areas, and sorting them, leaving the required number of large connected regions. Row and column projections are then performed to obtain pixel sums. The average gray value at the center of mass of each connected region is compared with the average gray value of the whole region; if the difference is less than a predefined threshold, the pixel values of that region in the binary image are set to 0.
After identifying the boundaries between neighboring connected regions, the image undergoes a crack splice-and-fill operation (Figure 6e). The areas of the connected regions are sorted in descending order, and the two largest bounding boxes are identified. A new bounding box is created by merging the two, with its upper-left corner coordinates set to the minimum values of the original boxes’ coordinates and its width and height set to the sums of the original boxes’ dimensions. Cracks are then labeled (Figure 6f).
Finally, the width-to-height ratio of the new bounding box is calculated. A ratio greater than 1 indicates a transverse crack, while a ratio less than or equal to 1 indicates a longitudinal crack.
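The merge-and-classify rule above reduces to a few lines; a sketch under the stated convention, with boxes represented as (x, y, w, h) tuples and illustrative names:

```python
def merge_boxes(box_a, box_b):
    """Merge two (x, y, w, h) boxes: the upper-left corner is the
    element-wise minimum of the corners, and the width and height
    are the sums of the original dimensions, as described above."""
    x = min(box_a[0], box_b[0])
    y = min(box_a[1], box_b[1])
    return (x, y, box_a[2] + box_b[2], box_a[3] + box_b[3])

def classify_crack(box):
    """Width-to-height ratio > 1 -> transverse; otherwise longitudinal."""
    _, _, w, h = box
    return "transverse" if w / h > 1 else "longitudinal"
```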
By analyzing row and column projections, the distribution of targets in the image can be quickly determined along the horizontal and vertical directions, as shown in the row projection plot (Figure 7a) and the column projection plot (Figure 7b).

4.2. GNSS and Meteorological Data Processing

4.2.1. GNSS Data Processing

A database table (Table 2) was established for storing the GNSS monitoring data. The reference station receives satellite signals and calculates positions, which are compared with the signals received by the monitoring station to compute residuals. After error correction, the monitoring accuracy reaches the millimeter level. The monitoring data format is set to RTCM 3.2, the data acquisition rate to 10 Hz, the baseline solution period length to 30 min, the smoothing duration to 1 h, the elevation cutoff angle of the satellites participating in the baseline solution to 20°, the result time format to H:M:S.SS, and the folder storage frequency to 24 h.
The GNSS receiver used in this study achieves a horizontal accuracy of ±(8 + 1 × 10⁻⁶ × D) mm and a vertical accuracy of ±(15 + 1 × 10⁻⁶ × D) mm, where D denotes the baseline length; these specified values are denoted σ below. Verification was conducted with baselines of D ≤ 5 km. The number of effective GNSS satellites in a single system was ≥8, and the satellite elevation cutoff angle was set to ≤10°. The GNSS receiver conducted observations at points with known coordinates in 10 independent sessions, each collecting 100 RTK measurements. To mitigate correlation errors, the receiver was re-initialized by restarting between sessions. The RTK measurement accuracy was calculated using Equations (3) and (4) and should be better than the corresponding σ.
$$m_{hk} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(N_i - N_0)^2 + (E_i - E_0)^2\right]} \qquad (3)$$
$$m_{vk} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(U_i - U_0)^2} \qquad (4)$$
where $m_{hk}$ and $m_{vk}$ represent the horizontal and vertical accuracies of the RTK measurements, respectively; $N_0$, $E_0$, and $U_0$ represent the north, east, and height coordinates of the known point in the local topocentric coordinate system; and $N_i$, $E_i$, and $U_i$ represent the corresponding coordinates of the $i$-th positioning result of the tested device. All eight parameters are in mm; $i$ indexes the RTK measurement results, and $n$ is their number.
Finally, when $D = 4.5$ km, the measurements showed that $m_{hk} = 7.45$ mm and $m_{vk} = 13.28$ mm, which met the engineering accuracy requirements.
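A NumPy sketch of this verification, computing $m_{hk}$ and $m_{vk}$ per Equations (3) and (4) and the tolerance σ for $D = 4.5$ km; it assumes the 1 × 10⁻⁶ × D term expresses 1 ppm of the baseline length (1 mm per km), which is consistent with the reported pass for $m_{hk} = 7.45$ mm.

```python
import numpy as np

def rtk_accuracy(neu: np.ndarray, neu_ref: np.ndarray):
    """RTK accuracy per Equations (3) and (4).

    neu     : (n, 3) array of N, E, U solutions in mm
    neu_ref : (3,) known N0, E0, U0 coordinates in mm
    """
    d = neu - neu_ref
    m_hk = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))  # Eq. (3)
    m_vk = np.sqrt(np.mean(d[:, 2] ** 2))                  # Eq. (4)
    return m_hk, m_vk

# Tolerance for a 4.5 km baseline (1e-6 x D read as 1 ppm of D):
D_mm = 4.5e6
sigma_h = 8 + 1e-6 * D_mm   # 12.5 mm horizontal
sigma_v = 15 + 1e-6 * D_mm  # 19.5 mm vertical
# A session passes if m_hk <= sigma_h and m_vk <= sigma_v;
# the reported m_hk = 7.45 mm and m_vk = 13.28 mm both pass.
```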

4.2.2. Meteorological Data Processing

First, the IP address and port number of the MQTT broker are configured on the weather instrument, with the publication topic specified as “/sensor/YL8210M200004”. On the subscription side, the data collection and storage system uses the MQTT protocol to connect to the same broker address with authenticated credentials. It receives the weather instrument’s JSON-formatted data (including the SN code, temperature, humidity, pressure, wind speed, PM2.5, and other parameters). The system automatically timestamps each data entry (format: YYYY-MM-DD HH:MM:SS) using the gettime() function, then parses the received JSON data and stores it, together with the reception time, in a MySQL database table. The meteorological database table is shown in Table 3.
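A minimal paho-mqtt sketch of this subscription flow. The topic is the one given above; the broker address, credentials, JSON field names, and the store_row() helper are placeholders for illustration (the actual schema is defined in Table 3). Note that paho-mqtt 2.x additionally requires a CallbackAPIVersion argument to Client().

```python
import json
from datetime import datetime

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

TOPIC = "/sensor/YL8210M200004"  # publication topic from the text

def store_row(row):
    print(row)  # stand-in for the INSERT into the MySQL table (Table 3)

def on_message(client, userdata, msg):
    data = json.loads(msg.payload.decode("utf-8"))
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")  # reception timestamp
    store_row((stamp, data.get("SN"), data.get("temperature"),
               data.get("humidity"), data.get("pressure"),
               data.get("wind_speed"), data.get("pm25")))

client = mqtt.Client()
client.on_message = on_message
client.username_pw_set("user", "password")  # placeholder credentials
client.connect("broker.example.com", 1883)  # placeholder broker address/port
client.subscribe(TOPIC)
client.loop_forever()
```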

4.2.3. Local Data Buffering Strategy for Edge Device

To address network instability in field environments, the edge device layer employs a lightweight local data buffering mechanism for GNSS receivers and weather instruments. The solution maintains data continuity during network outages or transmission delays in complex field environments.
Specifically, during network anomalies, devices temporarily store real-time monitoring data (GNSS coordinates and meteorological parameters) via built-in SD cards, appending a timestamp and device identifier to each data unit to ensure temporal order and traceability. The buffering strategy combines a capacity threshold with a First-In-First-Out mechanism: when the buffer storage reaches a limit of 500 entries, the oldest data are automatically discarded to free up space and prevent buffer overflow.
The network status monitoring module determines network availability in real time by periodically probing server heartbeat packets. Once three consecutive heartbeat responses are detected, the system triggers a batch upload of the buffered data. Data are packaged in chronological order for reliable transmission via the MQTT protocol at QoS 1. Upon receiving a server confirmation response, the uploaded data are deleted from the buffer to avoid redundant transmission and storage. This mechanism ensures data integrity in weak-network environments, reduces the devices’ dependence on real-time connectivity, and balances the low power consumption and storage efficiency requirements of embedded devices while enhancing system robustness.
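A sketch of this buffering strategy: a 500-entry FIFO, per-unit timestamp and device tags, a three-heartbeat gate, and acknowledgement-gated deletion. The publish and probe callables are injected placeholders standing in for the device’s MQTT client and heartbeat check.

```python
import time
from collections import deque

BUFFER_LIMIT = 500                    # capacity threshold from the text
buffer = deque(maxlen=BUFFER_LIMIT)   # FIFO: oldest entries dropped on overflow

def buffer_sample(device_id, payload):
    """Tag each data unit with a timestamp and device identifier."""
    buffer.append({"ts": time.time(), "dev": device_id, **payload})

def network_available(probe, required=3):
    """Declare the link usable after three consecutive heartbeat responses."""
    return all(probe() for _ in range(required))

def flush(publish, probe):
    """Batch-upload buffered data in chronological order (MQTT QoS 1),
    deleting an entry only after the server confirms receipt."""
    if not network_available(probe):
        return
    while buffer:
        if publish(buffer[0], qos=1):  # True once the server acknowledges
            buffer.popleft()
        else:
            break                      # link lost again; keep the remainder
```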

4.3. Three-Dimensional Model Construction

The oblique photography data are pre-processed through quality checks, the removal of unqualified photos, and the extraction of images of areas of concern. The pre-processed data are then imported into a computer, where an oblique model is generated using ContextCapture 10.20. SVSGeoModeler is then used to monomerize the model’s subsidiary structures and complete the mapping, resulting in a highly detailed large-scene 3DRS model. Finally, the model is converted into the 3D Tiles format, providing a reliable data foundation for the platform (Figure 8).
Component-level 3DRS modeling is based on the indoor and multi-source sensor point cloud data acquired with the Trimble SX10 3D scanner. The indoor 3D point cloud data are processed using Trimble RealWorks and SketchUp. Trimble RealWorks is used to denoise and register the point cloud, ensuring an alignment error of ≤2 mm and a confidence level of ≥50%. Noise from pedestrians and debris is segmented and removed, retaining only the main structure of the monitoring station. The optimized point cloud is then imported into SketchUp to create a 1:1 scale component-level 3DRS model (Figure 9). The large-scene and component-level models are integrated using SVSGeoModeler, and the resulting 3D data are optimized and sliced using Cesium. Finally, the sliced data are exported to the server for management and service deployment.
To enable fast 3D scene loading on the web, tile division is improved by reconstructing the top layer and loading only the tiles within the current field of view, significantly reducing the number of requests. The Draco vertex compression algorithm is used to reduce vertex data storage, and the ASTC algorithm is used for texture compression. Block-based compression preserves image quality while improving the compression ratio, enabling quick and efficient web-based model loading. Figure 10 shows the 3D model fusion results.

5. Deformation Monitoring Visualization Platform

The platform integrates multi-source monitoring data with the 3D model, using Vue.js and the Cesium framework to develop the user interface and functionalities. System deployment is achieved using Spring Boot and Tomcat, with MySQL serving as the database. The platform features five major functional modules: map base management, spatial analysis tools, multi-source monitoring, intelligent vision, and site selection. These modules enable the remote control of multiple monitoring devices and the unified management of multiple monitoring data streams. This design ensures effective control and seamless integration of the monitoring system, providing robust support for safety monitoring, early warning, auxiliary analysis, and emergency response operations (Figure 11).

5.1. Map Base Management Module

The map base management module handles the presentation of a refined 3DRS model, remote sensing image selection, and base map switching, along with spatial calculation and analysis. It enables operations such as switching, roaming, and double-screen comparison between multi-temporal remote sensing images and various base maps. The refined 3DRS model demonstrates the morphological characteristics of the hazard point, including its overall distribution, contour, and local texture. Figure 12 is a screenshot of the visualization system, showing a 3D model of a culvert gate.
The module facilitates spatial and temporal comparative analysis of the 3DRS model across different time nodes. By integrating digital twin and IoT technologies, it provides managers with precise insights into the dynamic changes, deformation risks, and safety status of the monitoring object. This supports comprehensive risk assessment and informed response throughout the entire monitoring process [38,39].

5.2. Multi-Source Monitoring Module

The multi-source monitoring module continuously accesses and iteratively updates real-time data from the database. Its functionalities include displaying GNSS data, presenting six meteorological elements, threshold-based warnings, data querying, and report downloading. Figure 13 shows the interface of this module.
GNSS data are visualized with real-time timestamps on the X-axis and displacement changes on the Y-axis, displaying the deformation variables as line graphs (Figure 13a). The module supports querying historical data, calculating their mean and standard deviation, and setting the warning threshold as the mean ± 2 standard deviations. Data exceeding the threshold are recorded, and alarm notifications are sent to the manager; the module automatically generates timestamped GNSS coordinates when an alarm is triggered.
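A minimal sketch of this threshold rule over a window of historical displacements; names are illustrative.

```python
import numpy as np

def warning_bounds(history: np.ndarray):
    """Warning threshold = mean +/- 2 standard deviations of history."""
    mu, sigma = history.mean(), history.std()
    return mu - 2 * sigma, mu + 2 * sigma

def check_alarm(displacement: float, history: np.ndarray) -> bool:
    lo, hi = warning_bounds(history)
    return not (lo <= displacement <= hi)  # outside the band: record and alert
```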
Meteorological data, updated every 5 min, are displayed as line graphs. Rainfall greater than 30 mm/h automatically triggers a Level 3 alert and superimposes the alert area on the 3DRS model. Managers can download daily and monthly reports (Figure 14) and combine the platform’s GNSS data, meteorological data, 3DRS models, and alarm information to build a unified, continuously updated lifeline big data repository.

5.3. Intelligent Vision Module

The intelligent vision module encompasses several key functionalities, including video monitoring (Figure 15b), remote lighting control, enhanced virtual environments (Figure 15a), and crack extraction (Figure 15c); Figure 15 shows the interface of this module. To address the lack of dynamic updates in the 3D model, the module integrates real-time video data to restore the real scene. High-resolution vision sensors accurately capture real-time images of deformation points and their surroundings, enabling managers to track local deformation processes and correlate them with GNSS and meteorological data for analysis. This integration reveals interaction relationships and provides insights into equipment conditions; in cases of equipment damage, the module helps determine whether the cause is natural or human-induced.
By visualizing alarm data, the intelligent vision module records specific physical processes and actual scenes, providing managers with a richer information dimension to understand the underlying causes of data fluctuation. The nighttime light control and vision sensor enable 24 h remote real-time monitoring of the target area (Figure 15b). Additionally, edge detection is applied to monitoring images to extract crack parameter information (Figure 15c).
To improve video surveillance quality at night, this study developed a dual-protocol illumination control framework leveraging WebSocket and TCP communication. A WebSocket server establishes persistent full-duplex connections with web clients, enabling the real-time bidirectional transmission of lighting commands and status feedback. Concurrently, a TCP server interfaces with the switching registers through dedicated TCP clients, ensuring the reliable delivery of control signals. The protocol stack implements hexadecimal command conversion (e.g., 0x11 → decimal 17) through buffer-based data packaging, in which Boolean control inputs (1/0) are translated into device-specific byte streams. Hardware actuation is achieved via electromechanical relays in the switching registers: coil energization (command = 1) closes the circuit contacts to activate the supplemental lighting, while de-energization (command = 0) cuts the power supply. System configuration involves predefined network parameters (IP/port) for switching-register connectivity. Figure 16 shows the flowchart of the remote lighting control.
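A sketch of the command-conversion and TCP delivery step. Only the hexadecimal example (0x11 = decimal 17) comes from the text; the byte framing, host, and port below are assumptions for illustration.

```python
import socket

def pack_command(on: bool) -> bytes:
    """Translate a Boolean control input (1/0) into a device byte stream.
    0x11 (decimal 17) is the opcode example from the text; the trailing
    state byte is an assumed framing."""
    return bytes([0x11, 0x01 if on else 0x00])

def send_to_register(host: str, port: int, on: bool) -> None:
    """TCP client delivering the control signal to the switching register;
    command = 1 energizes the relay coil and closes the lighting circuit."""
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(pack_command(on))

# send_to_register("192.0.2.10", 5000, on=True)  # placeholder IP/port
```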

6. System Testing and Applications

6.1. System Latency Testing

To evaluate the end-to-end delay from sensor data acquisition to monitoring platform reception, multi-dimensional measurement methods combined with real-time data capture techniques are employed. First, the timestamp synchronization method records the sensor acquisition time $T_1$ and the platform reception time $T_2$, with the single-trial delay defined as $\Delta T = T_2 - T_1$. Statistical characteristics are calculated over 100 consecutive samples. Supplementary ICMP round-trip time (RTT) testing analyzes the 4G network transmission delay, with the one-way delay taken as half of the RTT. A pressure test is also used to simulate 3× the normal data traffic and verify delay stability under high load.
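A sketch of the timestamp-synchronization statistics, assuming the paired acquisition/reception times have already been collected; names are illustrative.

```python
import statistics

def delay_stats(samples):
    """Single-trial delay dT = T2 - T1 over consecutive samples, where
    T1 is the sensor acquisition time and T2 the platform reception
    time (both in seconds); returns summary statistics in ms."""
    delays = [(t2 - t1) * 1000.0 for t1, t2 in samples]
    return {"n": len(delays),
            "mean_ms": statistics.mean(delays),
            "stdev_ms": statistics.stdev(delays),
            "max_ms": max(delays)}

def one_way_from_rtt(rtt_ms):
    """ICMP test: one-way network delay taken as half the round-trip time."""
    return rtt_ms / 2.0
```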
The experimental results show that the system achieves an average delay of 215 ms under no-load conditions, meeting the real-time requirement (<500 ms) for highway slope monitoring and early warning. Although the delay increases significantly under load, it remains within the early-warning threshold. The proposed methods provide a quantitative basis for evaluating and optimizing the system’s real-time performance. The specific measurement dimensions, methods, instruments, and delay results are shown in Table 4.

6.2. System Application

The system was applied to the monitoring of a highway slope in Fujian Province, China, deployed along a 30 km mountainous section of a two-lane expressway. This area is characterized by average slope gradients of 28–45°, with bedrock outcrops accounting for 60% of the monitored slopes, and an annual average rainfall of 1600–2200 mm. Such environmental conditions are conducive to landslides, and with a daily traffic volume of 5000–8000 vehicles on this section, deformation monitoring is necessary.
Six monitoring stations were deployed (Figure 17), each integrating GNSS displacement sensors, weather sensors, and high-definition cameras. Deployment of the slope monitoring system requires avoiding electromagnetic interference and ensuring a clear line of sight for the GNSS antennas. Sensors are fixed using concrete foundations or grouted boreholes; cables are shielded with armored conduits and waterproof seals; equipment housings meet IP66 standards; and corrosion-resistant materials are selected for coastal areas. The system utilizes 4G for data transmission and solar-charged lithium batteries for power supply. Core sensors meet stringent precision requirements (GNSS horizontal accuracy ≤ 1 cm, inclinometer resolution 0.001°). A monitoring database and system software realize slope monitoring, environmental monitoring, and video surveillance, thereby enhancing the safety protection level of the monitored objects.
To date, the monitoring system has operated stably, with monthly displacement velocities in the monitored area ranging from −1.0 to 2.1 mm/month, indicating the stability of the monitored region. Specifically, monitoring stations 1–6 recorded average deformation rates of 0.8, 1.3, 2.1, −0.8, −0.5, and −1.0 mm/month, respectively.

7. Conclusions

This study proposes a multi-source IoT and intelligent vision-integrated deformation monitoring system to address limitations in conventional methods. The system combines GNSS terminals, environmental sensors, and visual sensors for real-time data acquisition via the MQTT protocol. Point cloud data acquired from UAV oblique photography and 3D laser scanning are processed using ContextCapture and SVSGeoModeler to generate large-scene and component-level 3D models. These models undergo vertex compression with CesiumLab and lightweight optimization based on ASTC for efficient storage and rendering. A ZLMediaKit streaming server, combined with 3D fusion and nighttime fill lighting, enhances virtual–real integration. Blender-simulated cameras and edge detection enable real-scene synthesis and crack extraction. By fusing 3DRS models with GNSS positioning, meteorological monitoring, and vision-based dynamic tracking, the system achieves real-time digital twin interoperability, significantly improving observational capacity compared to traditional approaches.
Compared with traditional single-source data-based deformation monitoring methods that rely solely on GNSS, our proposed system’s multi-source data integration approach offers a more comprehensive and accurate assessment of deformation. Single-source methods often fail to capture the complex environmental factors and detailed object characteristics that can influence deformation. For example, they may overlook the impact of sudden weather changes, like strong winds or heavy rainfall, on the monitored objects. In contrast, our system, by incorporating meteorological sensors, can take these factors into consideration, providing a more reliable assessment of deformation. Furthermore, unlike vision-based methods that only focus on 2D image analysis for crack detection, our system uses 3D models and edge detection in a virtual environment for more accurate identification of cracks, especially in complex scenes. The 3D model provides a more realistic spatial representation, enabling better differentiation between actual cracks and false positives caused by shadows or texture variations in 2D images.
The system achieves comprehensive monitoring and early warning of surface deformation by integrating all diverse sensor data, providing precise data and robust analysis paths for surface deformation assessment. This approach supports efficient decision-making and rapid response by managers, serving as a critical component in the development of smart cities and the enhancement of urban security resilience.
Future work will integrate additional sensors, such as crack meters and displacement meters, with existing GNSS and meteorological sensors, to capture deformation information more comprehensively and accurately at different scales. By incorporating data from more sources, the system can construct a complete, multi-dimensional information map of surface deformation, ranging from small local displacements to large-scale overall deformations. This will provide a robust foundation for timely hazard detection and scientific safety assessments.

Author Contributions

Conceptualization, Y.H. and W.Y.; methodology, Y.H. and W.Y.; validation, Q.S.; formal analysis, W.Y. and Q.H.; resources, W.Y.; data curation, H.L., S.L., and W.Y.; writing—original draft preparation, S.Z., W.Y., and Q.S.; writing—review and editing, W.Y., H.L., S.L., and S.Z.; supervision, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fujian Province’s Foreign Cooperation Project (2023I0047), the Fujian Provincial Natural Science Foundation Guiding Project (2024Y0057), Fujian Provincial Undergraduate Education and Teaching Research Project (FBJY20240095), the Fujian Provincial Natural Science Foundation Project (2023J011432, 2024J011195), the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJQN202203406), the Open Project Fund of Hunan Provincial Key Laboratory for Remote Sensing Monitoring of Ecological Environment in Dongting Lake Area (Project No: DTH Key Lab.2024-04), the Fujian Construction Science and Technology Research and Development Project (2025-K-66), and the Natural Science Foundation of Fujian Province (2022J011237).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors sincerely thank the anonymous reviewers from Xiamen University of Technology for their constructive suggestions, which vastly improved the quality of the original manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brownjohn, J.; Pan, T.-C. Identifying Loading and Response Mechanisms from Ten Years of Performance Monitoring of a Tall Building. J. Perform. Constr. Facil. 2008, 22, 24–34. [Google Scholar] [CrossRef]
  2. Dixon, T.H.; Mao, A.; Bursik, M.; Heflin, M.; Langbein, J.; Stein, R.; Webb, F. Continuous Monitoring of Surface Deformation at Long Valley Caldera, California, with GPS. J. Geophys. Res. Solid Earth 1997, 102, 12017–12034. [Google Scholar] [CrossRef]
  3. Yan, Y.; Li, M.; Dai, L.; Guo, J.; Dai, H.; Tang, W. Construction of “Space-Sky-Ground” Integrated Collaborative Monitoring Framework for Surface Deformation in Mining Area. Remote Sens. 2022, 14, 840. [Google Scholar] [CrossRef]
  4. Hamza, V.; Stopar, B.; Sterle, O.; Pavlovčič-Prešeren, P. Observations and Positioning Quality of Low-Cost GNSS Receivers: A Review. GPS Solut. 2024, 28, 149. [Google Scholar] [CrossRef]
  5. Fredeluces, E.; Ozeki, T.; Kubo, N.; El-Mowafy, A. Modified RTK-GNSS for Challenging Environments. Sensors 2024, 24, 2712. [Google Scholar] [CrossRef] [PubMed]
  6. Nourmohammadi, H.; Keighobadi, J. Fuzzy Adaptive Integration Scheme for Low-Cost SINS/GPS Navigation System. Mech. Syst. Signal Process. 2018, 99, 434–449. [Google Scholar] [CrossRef]
  7. Lan, Z.; Wang, J.; Shen, Z.; Fang, Z. Highly Robust and Accurate Multi-Sensor Fusion Localization System for Complex and Challenging Scenarios. Measurement 2024, 235, 114851. [Google Scholar] [CrossRef]
  8. Xiao, R.; Shi, H.; He, X.; Li, Z.; Jia, D.; Yang, Z. Deformation Monitoring of Reservoir Dams Using GNSS: An Application to South-to-North Water Diversion Project, China. IEEE Access 2019, 7, 54981–54992. [Google Scholar] [CrossRef]
  9. Al-Ali, A.R.; Beheiry, S.; Alnabulsi, A.; Obaid, S.; Mansoor, N.; Odeh, N.; Mostafa, A. An IoT-Based Road Bridge Health Monitoring and Warning System. Sensors 2024, 24, 469. [Google Scholar] [CrossRef]
  10. Vasuhi, S.; Vaidehi, V. Target Detection and Tracking for Video Surveillance. WSEAS Trans. Signal Process. 2014, 10, 179–188. [Google Scholar]
  11. Hudda, S.; Barnwal, R.; Khurana, A.; Haribabu, K. A WSN and Vision Based Smart, Energy Efficient, Scalable, and Reliable Parking Surveillance System with Optical Verification at Edge for Resource Constrained IoT Devices. Internet Things 2024, 28, 101346. [Google Scholar] [CrossRef]
  12. Ma, J. BDS/GPS Deformation Analysis of a Long-Span Cable-Stayed Bridge Based on Colored Noise Filtering. Geod. Geodyn. 2023, 14, 163–171. [Google Scholar] [CrossRef]
  13. Nisha; Urvashi. A Systematic Literature Review of Internet of Video Things: Trends, Techniques, Datasets, and Framework. Internet Things 2023, 24, 100906. [Google Scholar] [CrossRef]
  14. Aguero, M.; Doyle, D.; Mascarenas, D.; Moreu, F. Visualization of Real-Time Displacement Time History Superimposed with Dynamic Experiments Using Wireless Smart Sensors and Augmented Reality. Earthq. Eng. Eng. Vib. 2023, 22, 573–588. [Google Scholar] [CrossRef]
  15. Song, Y.; Bi, J.; Wang, X. Design and Implementation of Intelligent Monitoring System for Agricultural Environment in IoT. Internet Things 2024, 25, 101029. [Google Scholar] [CrossRef]
  16. Chen, J.; Liu, J.-J.; Tian, H.-B. Basic Directions and Technological Path for Building 3D Realistic Geospatial Scene in China. Geomat. Inf. Sci. Wuhan Univ. 2022, 47, 1568–1575. [Google Scholar]
  17. Shi, J.; Pan, Z.; Jiang, L.; Zhai, X. An Ontology-Based Methodology to Establish City Information Model of Digital Twin City by Merging BIM, GIS and IoT. Adv. Eng. Inform. 2023, 57, 102114. [Google Scholar] [CrossRef]
  18. Huang, M.; Zhang, Z.; Fang, D.; Yuan, L.; Zhang, W.; Li, W. Discussion on the Construction about Urban-Scale 3D Real Scene. In Proceedings of the 2024 7th International Conference on Computer Information Science and Application Technology (CISAT), Hangzhou, China, 12–14 July 2024; pp. 355–359. [Google Scholar]
  19. Elrifaee, M.; Zayed, T.; Ali, E.; Ali, A.H. IoT Contributions to the Safety of Construction Sites: A Comprehensive Review of Recent Advances, Limitations, and Suggestions for Future Directions. Internet Things 2024, 31, 101387. [Google Scholar] [CrossRef]
  20. De Donato, M.C.; Corradini, F.; Fornari, F.; Re, B. SAFE: An ICT Platform for Supporting Monitoring, Localization and Rescue Operations in Case of Earthquake. Internet Things 2024, 27, 101273. [Google Scholar] [CrossRef]
  21. Zahran, S.; Masiero, A.; Mostafa, M.M.; Moussa, A.M.; Vettore, A.; El-Sheimy, N. Uavs Enhanced Navigation in Outdoor Gnss Denied Environment Using Uwb and Monocular Camera Systems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 665–672. [Google Scholar] [CrossRef]
  22. Zhou, B.-R.; Li, J.-N.; Zhao, W.-M.; Wang, T.; Zhao, B.; Zheng, W.-Y.; Huang, G.-L.; Ou, M.-Y. A Multi-Level and Multi-Agent Collaborative Control Platform for City-Scale Virtual Power Plants Based on a Cloud-Pipe-Edge-End Fusion Architecture. In Proceedings of the 2024 4th International Signal Processing, Communications and Engineering Management Conference (ISPCEM), Montreal, QC, Canada, 28–30 November 2024; pp. 784–789. [Google Scholar]
  23. Gu, H.; Zhao, L.; Han, Z.; Zheng, G.; Song, S. AI-Enhanced Cloud-Edge-Terminal Collaborative Network: Survey, Applications, and Future Directions. IEEE Commun. Surv. Tutor. 2023, 26, 1322–1385. [Google Scholar] [CrossRef]
  24. Hu, C.; Xie, F.; Zhou, X.; Cai, L.; Yang, X.; Wang, J.; Fan, Y. Accuracy Analysis of Substation 3D Model Based on Oblique Photography. In Proceedings of the 2023 4th International Symposium on Insulation and Discharge Computation for Power Equipment (IDCOMPU2023), Wuhan, China, 27–28 May 2023; Springer: Singapore, 2023; pp. 323–331. [Google Scholar]
  25. He, Y.-R.; Yang, Y.-J.; Xu, S.-S.; He, Y.-D. Construction of High Precision 3D Campus Real Scene Model Based on UAV. In Proceedings of the 2021 International Conference on Intelligent Computing, Automation and Systems (ICICAS), Chongqing, China, 29–31 December 2021; pp. 101–106. [Google Scholar]
  26. He, Y.-R.; Chen, P.; Ma, W.-W.; Chen, C.-C. Construction of 3D Model of Tunnel Based on 3D Laser and Tilt Photography. Sens. Mater. 2020, 32, 1743–1756. [Google Scholar] [CrossRef]
  27. Zhou, W.; Yang, J.; Shao, R.; Lyu, J. Visualization Method of BIM Model Based on WebGIS. In Proceedings of the Third International Conference on Advanced Algorithms and Signal Image Processing (AASIP 2023), Kuala Lumpur, Malaysia, 10 October 2023; Volume 12799, pp. 1372–1378. [Google Scholar]
  28. He, Y.; Li, C.; He, Y.; Yu, X.; He, F. A Realistic 3D-Based Information System for Key Populations with Essential Diseases. In Proceedings of the 2024 12th International Conference on Information Systems and Computing Technology (ISCTech), Xi’an, China, 8–11 November 2024; pp. 1–7. [Google Scholar]
  29. Chen, H.; Bian, J. Streaming Media Live Broadcast System Based on MSE. J. Phys. Conf. Ser. 2019, 1168, 032071. [Google Scholar] [CrossRef]
  30. Wang, H.; Liu, H. Live Classroom System Based on FFMPEG+ RTMP Technology. Int. Core J. Eng. 2023, 9, 88–95. [Google Scholar]
  31. Sanchez, V.; Zakhor, A. Planar 3D Modeling of Building Interiors from Point Cloud Data. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1777–1780. [Google Scholar]
  32. Wang, W.; Zhao, R.; Mei, J.; Zheng, K. Design and Implementation of Campus Surveillance System Based on ZLMediaKit. In Proceedings of the 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA, 10–13 July 2023; pp. 356–358. [Google Scholar]
  33. Hu, D.; Minner, J. UAVs and 3D City Modeling to Aid Urban Planning and Historic Preservation: A Systematic Review. Remote Sens. 2023, 15, 5507. [Google Scholar] [CrossRef]
  34. Kumar, R.; Sawhney, H.; Samarasekera, S.; Hsu, S.; Tao, H.; Guo, Y.; Hanna, K.; Pope, A.; Wildes, R.; Hirvonen, D. Aerial Video Surveillance and Exploitation. Proc. IEEE 2001, 89, 1518–1539. [Google Scholar] [CrossRef]
  35. Nguyen, H.-N.; Kam, T.-Y.; Cheng, P.-Y. An Automatic Approach for Accurate Edge Detection of Concrete Crack Utilizing 2D Geometric Features of Crack. J. Signal Process. Syst. 2014, 77, 221–240. [Google Scholar] [CrossRef]
  36. Li, P.; Xia, H.; Zhou, B.; Yan, F.; Guo, R. A Method to Improve the Accuracy of Pavement Crack Identification by Combining a Semantic Segmentation and Edge Detection Model. Appl. Sci. 2022, 12, 4714. [Google Scholar] [CrossRef]
  37. Kim, B.-G.; Kim, D.-J.; Park, D.-J. Novel Precision Target Detection with Adaptive Thresholding for Dynamic Image Segmentation. Mach. Vis. Appl. 2001, 12, 259–270. [Google Scholar] [CrossRef]
  38. Tan, J.; Deng, F. Design and Key Technology of Urban Landscape 3d Visualization System. Procedia Environ. Sci. 2011, 10, 1238–1243. [Google Scholar] [CrossRef]
  39. Overbye, T.; Klump, R.; Weber, J. Interactive 3D Visualization of Power System Information. Electr. Power Compon. Syst. 2003, 31, 1205–1215. [Google Scholar] [CrossRef]
Figure 1. Architecture of the proposed deformation monitoring system. The edge collection layer consists of the GNSS receiver, meteorological receiver, visual sensor, drones, RoboMaster S1, DJI Airport, and 3D Laser Scanning. Data are transmitted, processed, and stored in this layer. The cloud fusion layer uses servers like Tencent Cloud and Nginx for data handling. The terminal interaction layer provides real-time data, live video, augmented reality, historical data, early warning, spatial analysis, report management, 3D-model management, edge detection, and remote control.
Figure 2. Multi-sensing data sources for the proposed deformation monitoring system.
Figure 3. Flowchart of data processing.
Figure 4. An example of 3D real-scene (3DRS) augmented reality.
Figure 5. An example of the image-based rendering effect.
Figure 6. Crack identification process. (a) Original image. (b) Histogram equalization to enhance contrast. (c) Contrast adjustment to refine details. (d) Binarization for segmentation. (e) Binary filtering to suppress noise. (f) Crack demarcation (highlighted in green) for precise localization.
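To make the Figure 6 pipeline concrete, the following is a minimal OpenCV sketch of the same six steps; the contrast gain/offset, kernel size, thresholding method, and file names are illustrative assumptions, not the system's actual parameters.

```python
# Minimal sketch of the crack-identification pipeline in Figure 6 (parameters assumed).
import cv2

img = cv2.imread("crack.jpg", cv2.IMREAD_GRAYSCALE)        # (a) original image

eq = cv2.equalizeHist(img)                                  # (b) histogram equalization
adj = cv2.convertScaleAbs(eq, alpha=1.3, beta=-20)          # (c) contrast adjustment (gain/offset assumed)

# (d) binarization; Otsu's method picks the threshold automatically (inverted: cracks are dark)
_, binary = cv2.threshold(adj, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# (e) binary filtering: morphological opening suppresses isolated noise pixels
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
cv2.imwrite("crack_mask.png", clean)                        # keep the mask for later analysis

# (f) crack demarcation: outline the detected region in green on the original image
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.drawContours(result, contours, -1, (0, 255, 0), 2)
cv2.imwrite("crack_marked.png", result)
```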
Figure 7. Image projection results: (a) row projection, (b) column projection.
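The row and column projections in Figure 7 amount to summing the binary crack mask along each image axis; peaks in the two profiles localize the crack's vertical and horizontal extent. A minimal sketch, assuming the mask file written by the previous snippet:

```python
# Row/column projection of the binary crack mask (file name assumed).
import cv2
import numpy as np

mask = cv2.imread("crack_mask.png", cv2.IMREAD_GRAYSCALE) > 0

row_proj = mask.sum(axis=1)   # crack pixels per image row
col_proj = mask.sum(axis=0)   # crack pixels per image column

rows = np.flatnonzero(row_proj)
cols = np.flatnonzero(col_proj)
if rows.size and cols.size:
    print(f"crack spans rows {rows[0]}-{rows[-1]}, columns {cols[0]}-{cols[-1]}")
    # crude width estimate: mean crack pixels per occupied row
    print(f"mean width ~ {row_proj[rows].mean():.1f} px")
```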
Figure 8. Flowchart for constructing a large-scene 3DRS model.
Figure 9. Component-level 3DRS model.
Figure 10. Web platform to display 3D model fusion results.
Figure 11. System architecture of the 3DRS-enhanced GNSS/intelligent vision surface deformation monitoring system. Based on the Cesium framework, Spring Boot framework, MySQL, and Tomcat, this system implements modules such as map base management, spatial analysis tools, multi-source monitoring, intelligent vision, and site selection.
Figure 12. A screenshot depicting the map base management module of the visualization system. The upper menu includes buttons for InSAR, Verification, Field Survey, Multi-source IoT Monitoring, Basemap, Layers, and Tools. The right-side menu manages 3D Models, Remote Sensing Imagery, and Maps.
Figure 13. Screenshots of the multi-source monitoring module: (a) GNSS data, (b) temperature data, (c) humidity data, (d) rainfall data, (e) wind speed data, (f) wind direction data, (g) atmospheric pressure data, (h) PM2.5 data.
Figure 14. Screenshots of the statement downloads module.
Figure 15. Intelligent vision module: (a) enhanced virtual environment, (b) video monitoring and remote light control, (c) information on crack parameters.
Figure 16. Remote light control process.
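The remote light control of Figure 16 runs over the system's WebSocket channel to switch station fill lights for nighttime video. The client-side sketch below illustrates only the mechanism; the endpoint URI and the JSON command schema are hypothetical, not the system's actual interface.

```python
# Hedged sketch: a WebSocket client toggling a station's fill light.
# URI and message schema are assumptions for illustration.
import asyncio
import json
import websockets  # third-party: pip install websockets

async def set_light(station_id: str, on: bool) -> None:
    uri = "ws://example-monitor-server/ws/light"   # hypothetical endpoint
    async with websockets.connect(uri) as ws:
        # send the control command and wait for the server's acknowledgement
        await ws.send(json.dumps({"station": station_id,
                                  "command": "on" if on else "off"}))
        ack = await ws.recv()
        print("server ack:", ack)

if __name__ == "__main__":
    asyncio.run(set_light("station-1", True))
```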
Figure 17. The display interface of the highway slope monitoring system features a top menu including buttons for Historical Alerts, InSAR, GNSS, Environmental Monitoring, Base Map, Layers, and Tools. The figure shows (a) the system page for monitoring station 1, monitoring station 2, and monitoring station 3 and (b) the system page for monitoring station 4, monitoring station 5, and monitoring station 6.
Table 1. Parameters of meteorological sensors.
Parameter              Measuring Range   Unit   Accuracy
Temperature            −40~60            °C     ±0.5 (25 °C)
Humidity               0~100             %RH    ±3% (60% RH, 25 °C)
Wind speed             0~60              m/s    0.2 ± 0.02 V (0~30 m/s, 25 °C)
Wind direction         0~360             °      ±2
Rainfall               0~24              mm     ≤FS ± 5%
Atmospheric pressure   30~110            kPa    ±0.1
Table 2. GNSS database table design.
Number   Field         Data Type   Constraint   Description
1        ID            Int         Not Null     Record identifier
2        B             Double      Not Null     Latitude
3        L             Double      Not Null     Longitude
4        H             Double      Not Null     Elevation
5        StationName   Varchar     Not Null     Site name
6        Time          Datetime    Not Null     Data collection time
7        X             Double      Not Null     Horizontal position
8        Y             Double      Not Null     Vertical position
9        Sx            Double      Not Null     Horizontal displacement velocity
10       Sy            Double      Not Null     Vertical displacement velocity
11       Sh            Double      Not Null     Perpendicular displacement velocity
12       Ax            Double      Not Null     Horizontal displacement acceleration
13       Ay            Double      Not Null     Vertical displacement acceleration
14       Ah            Double      Not Null     Perpendicular displacement acceleration
Table 3. Meteorological database table design.
Number   Field            Data Type   Constraint   Description
1        SN               Int         Not Null     Device serial number
2        Temperature      Varchar     Not Null     Temperature
3        Humidity         Varchar     Not Null     Humidity
4        Pressure         Varchar     Not Null     Air pressure
5        Wind_speed       Varchar     Not Null     Wind speed
6        Wind_direction   Varchar     Not Null     Wind direction
7        Rainfall         Varchar     Not Null     Rainfall
8        Pm2.5            Varchar     Not Null     PM2.5 concentration
9        Gettime          Datetime    Not Null     Data receiving time
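For reference, the schemas in Tables 2 and 3 translate directly into MySQL DDL. The sketch below creates the GNSS table of Table 2 (the meteorological table of Table 3 maps the same way); the table name, VARCHAR length, AUTO_INCREMENT, and connection parameters are assumptions for illustration, not the system's actual configuration.

```python
# Minimal sketch: creating the GNSS table of Table 2 in MySQL via pymysql.
# Table name, VARCHAR length, AUTO_INCREMENT, and credentials are assumptions.
import pymysql  # pip install pymysql

GNSS_DDL = """
CREATE TABLE IF NOT EXISTS gnss_monitoring (
    ID INT NOT NULL AUTO_INCREMENT,      -- record identifier
    B DOUBLE NOT NULL,                   -- latitude
    L DOUBLE NOT NULL,                   -- longitude
    H DOUBLE NOT NULL,                   -- elevation
    StationName VARCHAR(64) NOT NULL,    -- site name
    `Time` DATETIME NOT NULL,            -- data collection time
    X DOUBLE NOT NULL,                   -- horizontal position
    Y DOUBLE NOT NULL,                   -- vertical position
    Sx DOUBLE NOT NULL,                  -- horizontal displacement velocity
    Sy DOUBLE NOT NULL,                  -- vertical displacement velocity
    Sh DOUBLE NOT NULL,                  -- perpendicular displacement velocity
    Ax DOUBLE NOT NULL,                  -- horizontal displacement acceleration
    Ay DOUBLE NOT NULL,                  -- vertical displacement acceleration
    Ah DOUBLE NOT NULL,                  -- perpendicular displacement acceleration
    PRIMARY KEY (ID)
)
"""

conn = pymysql.connect(host="localhost", user="monitor",
                       password="change-me", database="deformation")
with conn.cursor() as cur:
    cur.execute(GNSS_DDL)
conn.commit()
conn.close()
```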
Table 4. System latency test results.
Measurement Dimension        Method                      Tool                       Delay Results (Mean ± SD)
End-to-end delay             Timestamp synchronization   Network debugger           No-load: 215 ± 12 ms; load: 380 ± 45 ms
Network transmission delay   ICMP RTT testing            Ping tool                  100–150 ms
Load response delay          Stress testing              Data monitoring platform   Delay growth rate: 76.7%
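As a consistency check, the delay growth rate in the last row follows directly from the end-to-end means in the first row: (380 − 215) / 215 ≈ 0.767, i.e., 76.7%.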