Article

Structural Deflection Measurement with a Single Smartphone Using a New Scale Factor Calibration Method

1 School of Science, China University of Geosciences, Beijing 100083, China
2 National Key Laboratory of Strength and Structural Integrity, School of Aeronautical Science and Engineering, Beihang University, Beijing 100191, China
3 National Key Laboratory of Strength and Structural Integrity, Aircraft Strength Research Institute of China, Xi'an 710065, China
* Author to whom correspondence should be addressed.
Infrastructures 2025, 10(9), 238; https://doi.org/10.3390/infrastructures10090238
Submission received: 28 July 2025 / Revised: 2 September 2025 / Accepted: 5 September 2025 / Published: 10 September 2025

Abstract

This study proposes a novel structural deflection measurement method using a single smartphone with an innovative scale factor (SF) calibration technique, eliminating reliance on laser rangefinders and industrial cameras. Conventional off-axis digital image correlation (DIC) techniques require laser rangefinders to measure discrete points for SF calculation, suffering from high hardware costs and sunlight-induced ranging failures. The proposed approach replaces physical ranging by deriving the SF through geometric relationships of known structural dimensions (e.g., bridge length/width) within the measured plane. A key innovation lies in a versatile SF calibration framework adaptable to varying numbers of reference dimensions: a non-optimized calculation integrates smartphone gyroscope-measured 3D angles when only one dimension is available; a local optimization model with angular parameters enhances accuracy for 2–3 known dimensions; and a global optimization model employing spatial constraints achieves precise SF resolution with ≥4 reference dimensions. Indoor experiments demonstrated sub-0.05 m ranging accuracy and deflection errors below 0.30 mm. Field validations on a bridge of Beijing Subway Line 13 successfully captured dynamic load-induced deformations, confirming outdoor applicability. This smartphone-based method reduces costs compared to traditional setups while overcoming sunlight interference, establishing a hardware-adaptive solution for vision-based structural health monitoring.

1. Introduction

The accurate measurement of deflection/displacement is of critical importance for the quality assessment of civil infrastructure and engineering structures and is essential for evaluating structural safety and facilitating regular maintenance. In practice, structural deflection/displacement can be measured using traditional contact-based sensors (e.g., dial gauges, LVDT) as well as emerging non-contact sensors (e.g., vision-based displacement sensors, GPS, laser vibrometers, and radar interferometry systems). Among these measurement techniques, vision-based optical techniques have gained widespread attention and increasing application due to their distinct advantages, including, but not limited to, their low cost, ease of setup, multi-point real-time measurement capabilities with visualization features, and a wide range of resolution options and applicability [1,2,3,4,5,6].
The implementation of structural deflection/displacement measurement using vision-based sensors involves four consecutive steps: (1) video image acquisition, (2) camera calibration, (3) image matching, and (4) physical displacement calculation. Based on the captured high-quality images, the physical displacements of measurement points on the surface of the test object are computed by multiplying the scale factor (SF) with the corresponding image displacements in image coordinates. As image acquisition and image matching methods have matured, camera calibration, which determines the SF at discrete measurement points, has become a critical aspect of vision-based deflection/displacement measurement.
In previous studies on deflection/displacement measurement, the treatment of SF has not been extensively explored [7,8]. For measurements within a small field of view, it is typical to adopt either a globally uniform SF calibration method or to use a calibration target [9,10]. While such approaches are often sufficient in confined settings, they become inadequate for large-scale outdoor applications, particularly in the case of bridge deflection monitoring over wide areas, where factors such as perspective distortion impose stricter precision requirements. To address this limitation, our previous research developed a method for calibrating SF at discrete measurement points using a laser rangefinder, which has proven to be practical and effective in outdoor environments. By integrating this laser rangefinder-assisted SF calibration technique with off-axis digital image correlation (off-axis DIC), an advanced video deflectometer was established [11], capable of real-time multi-point deflection and displacement measurements over remote distances. Furthermore, the technique was enhanced by employing light-emitting targets instead of natural textures for measurements exceeding 100 m [12], enabling precise deflection measurements at six critical positions along the second and third spans of the Wuhan Yangtze River Bridge under varying operational conditions. Recently, to further reduce hardware costs, an innovative long-distance measurement technique was introduced that uses smartphones as an alternative to industrial cameras [13]. This cost-effective solution achieved the real-time monitoring of a Beijing Subway Line 13 bridge using only a smartphone paired with a laser rangefinder.
Nevertheless, the outdoor deployment of laser rangefinders often suffers ranging failures due to interference from ambient light. Moreover, in many real scenarios, such as highways or riverbanks, the measurement device may be placed hundreds or even thousands of meters away from the targeted measurement points. Such distances exceed the operational range of most laser rangefinders, causing deflection measurements to fail. The use of laser rangefinders for SF determination has thus become a major limitation of existing video deflectometers, and a simpler SF calibration method that does not require a laser rangefinder is highly desirable.
Gyroscope-assisted calibration methods have been extensively studied in various domains, including visual–inertial joint calibration, dynamic environmental calibration, deep learning-aided calibration, and low-cost sensor calibration [14,15]. Notably, rapid calibration techniques for devices such as smartphones and drones have sparked a research boom since 2016. Since gyroscopes are inherently embedded in these devices, no additional auxiliary equipment is required, significantly simplifying the calibration process and enhancing its practicality. In 2016, Rehder et al. publicly released the open-source calibration tool Kalibr and elaborated on its methodology, which supports offline calibration for low-cost cameras and IMUs; this tool has become a widely adopted benchmark in both academia and industry [16]. Building on this, Rehder et al. [17] proposed a method that eliminates the need for specialized calibration fixtures: users can complete camera–IMU joint calibration through freehand device motion, drastically reducing hardware costs. In 2021, Yang et al. [18] introduced a self-calibration approach for drones and mobile robots that leverages natural scene features, enabling the dynamic calibration of low-cost IMUs and cameras in unstructured environments. Concurrently, Geneva et al. [19] developed a visual–inertial odometry framework that bypasses prior calibration requirements by optimizing sensor parameters in real time during motion. Further advancing the field, Wang et al. [20] utilized deep learning to directly estimate the relative pose between LiDAR and cameras from raw point cloud and image data, eliminating the need for calibration boards or manual intervention. In 2023, Li et al. [21] proposed a single-shot calibration procedure using smartphones, where IMU intrinsic parameters are estimated through natural handheld user motions. These studies collectively demonstrate that three-dimensional calibration methods utilizing gyroscopes have matured into feasible low-cost solutions. However, research on gyroscope-based two-dimensional calibration methods for off-axis measurement remains scarce.
In this work, based on our recently established cost-effective and ultraportable smartphone-based vision system for structural deflection monitoring [13], we propose a simpler, low-cost, and effective structural deflection measurement method that uses only a smartphone, eliminating the need for additional auxiliary equipment (i.e., laser rangefinders). A novel SF calibration method is proposed to calculate the distance from any measurement point on a measured plane to the optical center of the camera by utilizing several known dimensions within that plane (e.g., bridge length). This SF calibration method takes full advantage of the advanced sensors (i.e., gyroscopes) integrated into modern smartphones and eliminates the need for ranging tasks performed by laser rangefinders. The overall framework for conducting structural displacement measurements using a single smartphone is first outlined, followed by a detailed description of the proposed novel SF calibration method. Finally, the effectiveness and accuracy of the proposed method are validated through indoor simulated displacement experiments as well as a field experiment on railway bridge deflection measurement.

2. Methods

2.1. Principle of Smartphone-Based Structural Deflection Measurement

The method for measuring structural deflection/displacement using smartphones is illustrated in Figure 1. The measurement process comprises four steps, sketched in code after this paragraph: (1) Image acquisition: The smartphone is securely mounted on a tripod and positioned so that the camera remains horizontal, with its pitch angle adjusted to capture a clear image of the measured structure; video or digital images of the target object are then acquired using the integrated camera. (2) SF determination: Based on the distance from each measurement point to the optical center of the smartphone's camera, the SF value at each measurement point is determined through an off-axis imaging model, which defines the proportional relationship between image displacement and actual displacement. (3) Image displacement calculation: In the reference image, points of interest exhibiting a random gray-scale distribution, such as natural textures or artificial markings, are first identified; the position of each measurement point is then tracked across subsequent images using DIC to obtain image displacements. (4) Conversion from image displacement to actual deflection/displacement: The structural deflection/displacement is calculated from the determined SF and the image displacements.
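To make the pipeline concrete, the minimal Python sketch below strings the four steps together. It is an illustration rather than the authors' implementation: the video file name, marker location, and SF value are hypothetical placeholders, and OpenCV's integer-pixel template matching stands in for the subpixel IC-GN tracker named in the next paragraph.

```python
import cv2

# Step 1: image acquisition -- frames from a (hypothetical) smartphone video.
cap = cv2.VideoCapture("bridge.mp4")
ok, ref = cap.read()                       # reference frame
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

# Step 3 setup: a patch with a random gray-scale texture around the target.
x0, y0, w, h = 600, 320, 64, 64            # assumed marker location (pixels)
template = ref_gray[y0:y0 + h, x0:x0 + w]

k = 8.613                                  # Step 2: SF (mm/pixel), see Section 2.2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Coarse integer-pixel tracking; the paper uses subpixel IC-GN instead.
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)
    # Step 4: convert image displacement (pixels) to deflection (mm).
    print("vertical deflection: %.3f mm" % ((y - y0) * k))
cap.release()
```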
As mentioned above, the two key stages in measuring structural deflection/displacement using a smartphone are the determination of SF coefficients and the calculation of image displacement at each target point. In this work, the well-accepted inverse compositional Gauss–Newton (IC-GN) algorithm is used to achieve displacement tracking [22]. For brevity, the principle of the IC-GN algorithm is not repeated here. This work focuses on the calibration of SF at each measurement point. In the imaging model of off-axis DIC, the SF at each measurement point, which relates the actual displacement to the image displacement, can be expressed as
$$k = \frac{L}{\sqrt{\left[(x - x_c)^2 + (y - y_c)^2\right] l_{ps}^2 + f^2}} \cdot \frac{l_{ps}}{\cos\theta} \qquad (1)$$

where (x, y) are the image coordinates of the point to be measured, (x_c, y_c) are the coordinates of the image center, f is the focal length of the lens, l_ps is the physical size of a single pixel of the camera, θ is the vertical tilt angle of the smartphone, and L is the distance from the optical center of the camera to the point to be measured.
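As a concrete illustration, the short sketch below evaluates this scale factor at a given pixel. The function name and all numeric values are placeholders rather than the parameters of the phone used in the paper, and the grouping of cos θ follows the reconstruction of Equation (1) above.

```python
import numpy as np

def scale_factor(L, x, y, xc, yc, lps, f, theta):
    """Scale factor k (mm/pixel) of Equation (1) at image point (x, y).

    L      distance from the optical center to the measured point (mm)
    xc, yc image-center coordinates (pixels)
    lps    physical size of a single pixel (mm)
    f      focal length of the lens (mm)
    theta  vertical tilt angle of the smartphone (radians)
    """
    # Distance from the optical center to the pixel on the image plane.
    r = np.sqrt(((x - xc) ** 2 + (y - yc) ** 2) * lps ** 2 + f ** 2)
    return L * lps / (r * np.cos(theta))

# Placeholder numbers for illustration only.
print(scale_factor(L=4393.0, x=700, y=300, xc=640, yc=360,
                   lps=0.0018, f=4.7, theta=np.radians(19.7)))
```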
Except for L , all parameters in Equation (1) can be obtained from the camera and orientation sensor integrated into the smartphone. The existing SF calibration method [11] employs a laser rangefinder to determine the distance L from each measurement point to the optical center of the camera. Consequently, the laser rangefinder is an indispensable instrument for the off-axis calibration method. However, the use of a laser rangefinder presents three disadvantages: (1) The use of a laser rangefinder in outdoor settings is susceptible to inaccurate measurements due to the interference of bright sunlight. (2) The laser rangefinder has a limited operation distance and is relatively expensive. (3) For multiple measurement points, the process of operating the laser rangefinder to measure and record the distance of each point is time-consuming. To address these shortcomings, this work proposes a simple SF calibration method, which calculates the distance L using the known dimensions of the measured structure and the gyroscope angle data in the smartphone. This method enables fast SF calibration and real-time multi-point displacement measurement while avoiding the use of a laser rangefinder.

2.2. Calibration Method of Off-Axis Without Auxiliary Equipment

The proposed SF calibration method has two key prerequisites: (1) the measured points must lie on a vertical plane (a requirement shared with the off-axis calibration model); (2) at least one dimension of the structure on the measured plane must be known. Additional known dimensions improve the calibration accuracy. If fewer than four dimensions are available, the angle between the camera plane and the measured plane must also be provided as supplementary information.
For cases where fewer than four structural dimensions are known, the SF calibration method involves the following steps: First, position the smartphone parallel to the measurement plane, typically the vertical (plumb) plane, and record its initial position using built-in gyroscope software. Then, adjust the smartphone’s position to an optimal location where the camera can clearly capture both the target point and the structural feature with known dimensions. At this stage, capture a reference image to obtain the image coordinates of the known structure and record the 3D angular data from the gyroscope. Finally, use these data to calculate the distance L to the measured point. The subsequent section details the methodology for deriving SF values for each measured point.
To estimate the distance L from each measured point to the optical center of the camera, a distance measurement model is established, as illustrated in Figure 2. In this model, O-uv denotes the image coordinate system and O_c-XYZ denotes the camera coordinate system. The image coordinate system is defined with its origin at the upper-left vertex of the image plane and employs a pixel-based representation, wherein points are denoted by the triplet (u, v, 1)^T; the u-axis and v-axis are aligned with the two edges of the image plane. In the camera coordinate system, the origin is defined at the optical center of the camera, the X-axis and Y-axis are parallel to the u-axis and v-axis of the image coordinate system, respectively, and the Z-axis coincides with the optical axis of the camera. Points in the camera coordinate system are denoted by (X, Y, Z)^T. The coordinate transformation between image coordinates and camera coordinates on the image plane is given by Equation (2).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} l_{ps} & 0 & -u_0 l_{ps} \\ 0 & l_{ps} & -v_0 l_{ps} \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad (2)$$

where f is the focal length of the lens, l_ps is the physical size of a single pixel, and u_0 and v_0 are the horizontal and vertical coordinates of the image center, typically half the image resolution in the corresponding direction (in pixels). In camera coordinates, the image plane satisfies Z = f, and its normal vector is n_1 = (0, 0, 1)^T.
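In code, the transform of Equation (2) is a single matrix–vector product. The following minimal sketch uses illustrative names and assumed values for the pixel pitch and focal length.

```python
import numpy as np

def image_to_camera(u, v, u0, v0, lps, f):
    """Equation (2): map pixel (u, v) to camera coordinates (X, Y, Z), Z = f."""
    T = np.array([[lps, 0.0, -u0 * lps],
                  [0.0, lps, -v0 * lps],
                  [0.0, 0.0, f]])
    return T @ np.array([u, v, 1.0])

# Example with assumed parameters: a 1280 x 720 image, so (u0, v0) = (640, 360).
m = image_to_camera(800, 400, u0=640, v0=360, lps=0.0018, f=4.7)
print(m)  # -> [(u - u0) * lps, (v - v0) * lps, f]
```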
The built-in gyroscope software of the smartphone measures the 3D angle (α, β, γ)^T between the two planes, which indicates that the smartphone rotates first by γ around the X-axis, then by β around the Y-axis, and finally by α around the Z-axis; the counterclockwise direction is taken as positive. A value of zero for α means that the camera plane and the measured plane are aligned about the optical axis. Consequently, during the measurement process, α must be kept at zero (within ±0.1°), i.e., the smartphone must not rotate around the Z-axis. The normal vector n_2 of the measured plane can then be expressed as follows:
$$\mathbf{n}_2 = \begin{bmatrix} \cos\beta & \sin\beta\sin\gamma & \sin\beta\cos\gamma \\ 0 & \cos\gamma & -\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{bmatrix} \mathbf{n}_1 = \left(\sin\beta\cos\gamma,\; -\sin\gamma,\; \cos\beta\cos\gamma\right)^T \qquad (3)$$
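A minimal sketch of Equation (3) follows, assuming α has already been zeroed as required; the gyroscope angles arrive in degrees and are converted to radians. The function name is illustrative.

```python
import numpy as np

def measured_plane_normal(beta_deg, gamma_deg):
    """Equation (3): normal of the measured plane from gyroscope angles.

    beta_deg, gamma_deg -- rotations about the Y- and X-axes (degrees);
    the rotation alpha about the Z-axis is assumed to be held at zero.
    """
    beta, gamma = np.radians(beta_deg), np.radians(gamma_deg)
    return np.array([np.sin(beta) * np.cos(gamma),
                     -np.sin(gamma),
                     np.cos(beta) * np.cos(gamma)])

# Angular deviations from the indoor experiment in Section 3.1.
print(measured_plane_normal(16.9, 19.7))
```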
In the measured plane, there are m known feature points M_1, ..., M_i, ..., M_j, ..., M_m (1 ≤ i < j ≤ m), and the distance M_iM_j between feature points is known. The pixel coordinates of each corresponding image point m_i on the image plane are (u_i, v_i). According to Equation (2), the coordinates of m_i in the camera coordinate system are m_i = ((u_i − u_0) l_ps, (v_i − v_0) l_ps, f)^T, abbreviated as m_i = (m_ix, m_iy, f)^T.
The measured plane can be written as Ax + By + Cz + D = 0; its normal vector n_2 = (A, B, C)^T is obtained from Equation (3), so the plane equation contains only one unknown parameter, D, which can be solved by expressing the estimated feature-point distance \widehat{M_iM_j} as a function of D. Since O_C m_i and O_C M_i are collinear, O_C M_i can be written as k_i O_C m_i, so the coordinates of M_i are M_i = (k_i m_ix, k_i m_iy, k_i f)^T. Because M_i lies on the measured plane, its coordinates satisfy the plane equation, which yields k_i = −D / (A m_ix + B m_iy + C f). The estimated distance between two feature points M_i and M_j on the measured plane is then

$$\widehat{M_iM_j} = \sqrt{(k_i m_{ix} - k_j m_{jx})^2 + (k_i m_{iy} - k_j m_{jy})^2 + (k_i f - k_j f)^2}$$
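The back-projection step can be sketched as follows. Here `m` holds the camera-coordinate points m_i from Equation (2), the sign convention for k_i follows the derivation above, and the function names are illustrative.

```python
import numpy as np

def back_project(m, normal, D):
    """Project camera-coordinate points m_i onto the plane Ax + By + Cz + D = 0.

    m      -- (n, 3) array of points m_i = (m_ix, m_iy, f)
    normal -- plane normal (A, B, C) from Equation (3)
    D      -- plane offset (the single unknown)
    """
    k = -D / (m @ np.asarray(normal))  # k_i = -D / (A m_ix + B m_iy + C f)
    return k[:, None] * m              # M_i = k_i m_i

def estimated_distance(m, normal, D, i, j):
    """Estimated feature-point distance as a function of D (see above)."""
    M = back_project(m, normal, D)
    return float(np.linalg.norm(M[i] - M[j]))
```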
When only one distance M_iM_j between two feature points is known, equating this value with the calculated estimate, M_iM_j = \widehat{M_iM_j}, allows the parameter D to be resolved directly. If more than one but fewer than four known dimensions are available, the Levenberg–Marquardt (L-M) algorithm can be employed to determine D by optimizing the following equation:
$$D = \arg\min_{D} \sum_{(i,j)} \left( \widehat{M_iM_j} - M_iM_j \right)^2 \qquad (4)$$
When four or more distances are known, global optimization can be employed to determine all four parameters of the measured plane, further enhancing the calculation accuracy. By establishing an objective function expressed in terms of squared distances, the L-M algorithm can be applied to the following equation:
$$\left(A, B, C, D\right)^T = \arg\min_{A,B,C,D} \sum_{(i,j)} \left( \widehat{M_iM_j}^{\,2} - M_iM_j^{\,2} \right)^2 \qquad (5)$$
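Both optimizations can be prototyped with SciPy's Levenberg–Marquardt solver. The sketch below is an outline under stated assumptions rather than the authors' implementation: `pairs` lists the known dimensions as (i, j, distance) triples, and an extra residual pinning ||(A, B, C)|| = 1 is added in the global case to remove the scale ambiguity of the plane equation, a detail the text leaves implicit.

```python
import numpy as np
from scipy.optimize import least_squares

def _pair_distance(m, normal, D, i, j):
    """Distance between back-projected points M_i and M_j (see above)."""
    k = -D / (m @ normal)
    M = k[:, None] * m
    return np.linalg.norm(M[i] - M[j])

def solve_D(m, normal, pairs, D0=1000.0):
    """Equation (4): local optimization of the single offset D."""
    def residuals(p):
        return [_pair_distance(m, normal, p[0], i, j) - d for i, j, d in pairs]
    return least_squares(residuals, x0=[D0], method="lm").x[0]

def solve_plane(m, pairs, x0):
    """Equation (5): global optimization of all four plane parameters."""
    def residuals(p):
        normal, D = p[:3], p[3]
        res = [_pair_distance(m, normal, D, i, j) ** 2 - d ** 2
               for i, j, d in pairs]
        res.append(np.linalg.norm(normal) - 1.0)  # gauge-fixing residual
        return res
    return least_squares(residuals, x0=x0, method="lm").x
```

A reasonable starting guess x0 for the global fit is the normal from Equation (3) concatenated with the D obtained from the local fit.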
Once the spatial equation of the measured plane has been obtained, the distance from any point on the measured structure to the optical center of the camera can be calculated. For a point M with image coordinates (u, v) on the measured plane, this distance is given by Equation (6) (absolute values are taken so that the distance is positive regardless of the sign convention of the plane parameters). In this equation, A, B, C, D denote the parameters of the measured-plane equation, (u_0, v_0) are the horizontal and vertical coordinates of the image center, f is the focal length of the lens, and l_ps is the physical size of a single pixel of the camera.
$$L = \left\| O_C M \right\| = \frac{\left| D \right| \sqrt{(u - u_0)^2 l_{ps}^2 + (v - v_0)^2 l_{ps}^2 + f^2}}{\left| A (u - u_0) l_{ps} + B (v - v_0) l_{ps} + C f \right|} \qquad (6)$$
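A direct transcription of Equation (6) as reconstructed above, with absolute values keeping the result positive (an assumption of this sketch); the function name is illustrative.

```python
import numpy as np

def distance_to_point(u, v, plane, u0, v0, lps, f):
    """Equation (6): distance L from the optical center to image point (u, v)."""
    A, B, C, D = plane
    du, dv = (u - u0) * lps, (v - v0) * lps  # offsets from the image center (mm)
    num = abs(D) * np.sqrt(du ** 2 + dv ** 2 + f ** 2)
    den = abs(A * du + B * dv + C * f)
    return num / den
```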
Once the distance L from each measured point to the optical center of the camera has been obtained, the SF for each point can be calculated using Equation (1). Subsequently, the IC-GN algorithm can be employed to determine the image displacement of each point in pixels. Finally, the image displacement can be converted into actual physical displacement in millimeters based on the previously determined SF values.

3. Verification Experiments

3.1. Distance Verification Experiments

To compare the distances measured by the proposed method with those obtained from a laser rangefinder, validation experiments were conducted. As shown in Figure 3, the experimental setup comprises a calibration plate, a smartphone, a tripod, and a laser rangefinder. The calibration plate has a side length of 1 m, with a center-to-center distance of 80 mm between adjacent circles, and is firmly mounted vertically on a vibration isolation platform. It is worth noting that the calibration target is used here simply as a flat object with predefined marker spacing, not as part of a target-based calibration method; in practical bridge measurement experiments, a calibration target is not required, and marked points with a known distance between them are sufficient. A smartphone (model: Honor 20; camera resolution: 1280 × 720 pixels) was fixed to a tripod for stability during measurements. The bias stability of the gyroscopes in smartphones is between ±2°/h and ±10°/h. A laser rangefinder (model: Bosch 200; measurement range: up to 200 m; accuracy: ±0.200 mm) was utilized to measure the distance from the optical center of the camera to the target points, thereby providing reference values for the calculated results.
The smartphone was aligned parallel to the target measurement plane and interfaced with the embedded gyroscopic application (Gyroscope Explorer, version 2.0.5). The triaxial angular coordinates within the application were calibrated to establish baseline references. Upon achieving stabilized instrument readings (<0.5° fluctuation/min), the initial angular orientation was registered as (0.1°, 0.2°, 0.5°). The device was then systematically repositioned to an optimized measurement configuration. After 120 s of quiescent stabilization, the final angular coordinates were quantified as (0.1°, 17.1°, 20.2°). Subsequent vector subtraction revealed angular deviations of (0.0°, 16.9°, 19.7°), corresponding to the dihedral angles between the smartphone imaging plane and the reference measurement surface. Feature points were selected to simulate structures with known dimensions, alongside measured points P1, P2, and P3, as shown in Figure 4. Specifically, two green points served as feature points for the solution without optimization, three red points were used for local parameter optimization, and nine blue points were selected for global parameter optimization. The distances from the optical center of the smartphone camera to points P1, P2, and P3 were measured using a laser rangefinder, with values of 4.348 m, 4.435 m, and 4.393 m, respectively, for subsequent validation of the measurement results.
The distances from the optical center of the smartphone camera to the measured points P1, P2, and P3 were calculated using three methods: the solution without optimization, local parameter optimization, and global parameter optimization. The results are summarized in Table 1. Taking the laser rangefinder measurements as the benchmark, it is evident that the solution without optimization exhibits the largest error, with the discrepancy for point P2 reaching −0.667 m. In contrast, local parameter optimization significantly improves the measurement accuracy, yielding a maximum error of −0.256 m. When an adequate number of feature points are captured in the image, global parameter optimization further reduces the maximum error to −0.042 m, which is less than 1% of the reference value. These results indicate the following hierarchy of accuracy for determining the distances from the measured points to the camera's optical center: global parameter optimization > local parameter optimization > solution without optimization. This variation in accuracy correlates with the amount of known dimensional information: the solution without optimization requires only two feature points, local optimization requires a minimum of three feature points, and global optimization requires at least four. In practical applications, the most precise applicable method should be chosen according to the dimensional information available at each specific location.

3.2. Deflection Verification Experiment

As illustrated in Figure 5, a speckle pattern was affixed to the slider of the precision displacement control platform to serve as the target for the measurement. The slider was programmed to descend vertically in five discrete steps, each corresponding to a 2 mm displacement, culminating in an overall movement of 10 mm. This process was recorded using smartphone recording software (mcpro24fps, version 041aq) at a frame rate of 30 fps. The initial frame of the recorded sequence was used as the reference image for subsequent comparison and calculations. The distance between the optical center of the camera and the target was calculated from the plane-equation parameters obtained through global optimization, yielding an SF of 4.877 mm/pixel for the target.
The time–displacement curve of the target fluctuates around the zero line both before and after the slider movement, as shown in Figure 6. During the slider's operation, which involves a displacement of 2 mm per step over five steps, the data remain stable, with minimal noise interference. Table 2 indicates that within each 2 mm displacement interval, the maximum absolute error is 0.296 mm, while the maximum relative error is less than 3%. These results substantiate the accuracy of the proposed displacement measurement method. Furthermore, analysis of the systematic error indicates that the simplified computation adopted in our off-axis calibration model imposes inherent limitations on the SF determination. This error propagation explains the observed progressive amplification of displacement measurement inaccuracies with increasing target displacement magnitude and suggests potential optimization pathways for future calibration improvements.

4. Field Experiment

Beijing Subway Line 13 is the third urban rail transit line to be completed and put into operation in Beijing. Reports indicate that the daily passenger volume on Line 13 can reach up to 300,000 person-trips, underscoring the critical role it plays in the city's urban transportation network [23]. In this study, we utilized only a smartphone and a tripod, without any additional auxiliary equipment, to measure the deflection/displacement of the segment between Wudaokou Station and Zhichunlu Station under full-load conditions during morning peak hours.
The experimental setup is depicted in Figure 7. To ensure system stability, the tripod was secured into the soil. A smartphone was firmly mounted on the tripod and positioned on the horizontal road surface beneath the subway bridge. Adjustments were made to optimize image clarity before initiating video recording, which commenced prior to the subway's arrival and concluded after its departure. Two targets were placed along the side of the bridge at the one-quarter-span and mid-span locations. Prior to conducting the experiments, we measured the 3D angle between the measured plane and the smartphone camera plane, which was found to be (0.0°, 26.6°, 30.5°). Given that the thickness of the bridge slab is 360 mm and the width of the external metal railing, which lies outside the sound insulation, is 100 mm, these dimensions were utilized to calculate the distances from the targets to the optical center of the smartphone camera. The calculated distances to Target 1 and Target 2 were 7.483 m and 9.085 m, respectively. Using Equation (1), the SFs for Targets 1 and 2 were determined to be approximately 8.613 mm/pixel and 10.068 mm/pixel, respectively.
As shown in Figure 8, during the subway’s passage over the bridge, Target 1 exhibits a relatively large displacement with an average value of 1.592 mm, while Target 2 shows an average displacement of 1.213 mm. Notably, as the subway approaches the bridge, both targets exhibit upward movement due to the uneven load distribution across the bridge structure. When the subway is fully positioned on the bridge, an increased deflection is observed near the mid-span. These findings are consistent with actual observations and validate the effectiveness of the method employed in this study for measuring structural displacement.

5. Conclusions

A novel smartphone-based method for structural deflection/displacement measurement is proposed, which does not require auxiliary devices. The primary innovation of this method lies in its ability to perform the entire measurement process using only a smartphone, without the need for a laser rangefinder. Indoor verification experiments confirmed that the distance from the measured point to the camera’s optical center obtained through this method closely matches the results from a laser rangefinder, with a maximum ranging error of less than 0.05 m. Additionally, deflection/displacement measurements were consistent with theoretical values, with a maximum error of less than 0.30 mm. The dynamic load measurement experiment conducted on the Beijing Subway Line 13 bridge further validated the effectiveness of the proposed method. By using only a single smartphone, the proposed method significantly enhances the portability and flexibility of the deflection measurements. The simplicity and low cost of this smartphone-based system make it readily accessible and independent of external conditions, providing accurate deflection measurements. These advantages make the method particularly valuable in resource-constrained scenarios.

Author Contributions

The overarching research goals were developed by L.T. and L.Y. L.T. and Y.Y. provided the experimental methods and data and analyzed the experimental data. X.Z. oversaw the overall project progress. The initial draft of the manuscript was written by L.T. and Y.Y. L.T. and L.Y. revised and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key Research and Development Program of China (Grant no. 2024YFF0618900) and the Open Fund for the National Key Laboratory of Strength and Structural Integrity (Grant no. ASSIKFJJ202304004).

Acknowledgments

We would like to thank all the scholars cited in the references for their contributions to their respective research fields, as well as the providers of the open-source data used in this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Choi, H.S.; Cheung, J.H.; Kim, S.H.; Ahn, J.H. Structural dynamic displacement vision system using digital image processing. NDT E Int. 2011, 44, 597–608. [Google Scholar] [CrossRef]
  2. Ho, H.N.; Lee, J.H.; Park, Y.S.; Lee, J.J. A synchronized multipoint vision-based system for displacement measurement of civil infrastructures. Sci. World J. 2012, 2012, 519146. [Google Scholar] [CrossRef] [PubMed]
  3. Ye, X.; Ni, Y.; Wai, T.; Wong, K.Y.; Zhang, X.M.; Xu, F.J. A vision-based system for dynamic displacement measurement of long-span bridges: Algorithm and verification. Smart Struct. Syst. 2013, 12, 363–379. [Google Scholar] [CrossRef]
  4. Brownjohn, J.; Xu, Y.; Hester, D. Vision-Based Bridge Deformation Monitoring. Front. Built Environ. 2017, 3, 23. [Google Scholar] [CrossRef]
  5. Khuc, T.; Catbas, F.N. Computer vision-based displacement and vibration monitoring without using physical target on structures. Struct. Infrastruct. Eng. 2017, 13, 505–516. [Google Scholar] [CrossRef]
  6. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211. [Google Scholar] [CrossRef]
  7. Kromanis, R.; Xu, Y.; Lydon, D.; Martinez del Rincon, J.; Al-Habaibeh, A. Measuring Structural Deformations in the Laboratory Environment Using Smartphones. Front. Built Environ. 2019, 5, 44. [Google Scholar] [CrossRef]
  8. Du, W.; Lei, D.; Zhu, F.; Bai, P.; Zhang, J. A non-contact displacement measurement system based on a portable smartphone with digital image methods. Struct. Infrastruct. Eng. 2022, 20, 1322–1340. [Google Scholar] [CrossRef]
  9. Peng, Z.; Li, J.; Hao, H. Computer vision-based displacement identification and its application to bridge condition assessment under operational conditions. Smart Constr. 2024, 1, 3. [Google Scholar] [CrossRef]
  10. Peng, Z.; Li, J.; Hao, H.; Zhong, Y. Smart structural health monitoring using computer vision and edge computing. Eng. Struct. 2024, 319, 118809. [Google Scholar] [CrossRef]
  11. Pan, B.; Tian, L.; Song, X. Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation. NDT E Int. 2016, 79, 73–80. [Google Scholar] [CrossRef]
  12. Tian, L.; Pan, B. Remote Bridge Deflection Measurement Using an Advanced Video Deflectometer and Actively Illuminated LED Targets. Sensors 2016, 16, 1344. [Google Scholar] [CrossRef] [PubMed]
  13. Tian, L.; Zhang, X.; Pan, B. Cost-Effective and Ultraportable Smartphone-Based Vision System for Structural Deflection Monitoring. J. Sens. 2021, 2021, 8843857. [Google Scholar] [CrossRef]
  14. Li, M.; Mourikis, A.I. Online temporal calibration for camera-IMU systems: Theory and algorithms. Int. J. Robot. Res. 2014, 33, 947–964. [Google Scholar] [CrossRef]
  15. Li, M.; Mourikis, A.I. 3-D Motion Estimation and Online Temporal Calibration for Camera-IMU Systems. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 5709–5716. [Google Scholar]
  16. Rehder, J.; Nikolic, J.; Schneider, T.; Siegwart, R. A General Approach to Spatiotemporal Calibration in Multi-sensor Systems. IEEE Trans. Robot. 2016, 32, 383–398. [Google Scholar] [CrossRef]
  17. Rehder, J.; Nikolic, J.; Schneider, T.; Hinzmann, T.; Siegwart, R. Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4304–4311. [Google Scholar]
  18. Yang, Z.; Shen, S. Autocalibration of low-cost IMU and camera for navigation in cluttered environments. IEEE Sens. J. 2021, 21, 15625–15636. [Google Scholar]
  19. Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A Research Platform for Visual-Inertial Estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4666–4672. [Google Scholar]
  20. Wang, W.; Nobuhara, S.; Nakamura, R.; Sakurada, K. SOIC: Semantic Online Initialization and Calibration for LiDAR and Camera. arXiv 2020, arXiv:2003.04260. [Google Scholar] [CrossRef]
  21. Li, X.; Li, M.; Liu, Y. One-shot calibration of low-cost inertial sensors using a smartphone. Sensors 2023, 23, 1989. [Google Scholar]
  22. Pan, B.; Li, K.; Tong, W. Fast, robust and accurate digital image correlation calculation without redundant computations. Exp. Mech. 2013, 53, 1277–1289. [Google Scholar] [CrossRef]
  23. Sina.com. Beijing Subway Line 13 Records Daily Passenger Volume Exceeding 300,000 for the First Time. Available online: https://news.sina.com.cn/c/2007-10-04/015414020510.shtml (accessed on 10 January 2025).
Figure 1. Schematic of the proposed smartphone-based structural deflection measurement method.
Figure 2. Schematic diagram of distance measurement model.
Figure 3. Field diagram of indoor deflection/distance measurement experiment.
Figure 4. Schematic diagram of feature points selected by the three methods.
Figure 5. Field diagram of indoor deflection/displacement verification experiment.
Figure 6. Time–displacement curve of the measured point on the precision control displacement platform.
Figure 7. Field diagram of deflection/displacement measurement of Beijing Subway Line 13.
Figure 8. Time–displacement curves of two targets to be measured.
Table 1. Distance from the measured point to the optical center of the smartphone camera (unit: m).

Measured Point | Reference Value (Laser Rangefinder) | Without Optimization: Value / Error | Local Optimization: Value / Error | Global Optimization: Value / Error
P1 | 4.348 | 3.998 / −0.350 | 4.435 / +0.087 | 4.333 / −0.015
P2 | 4.435 | 3.768 / −0.667 | 4.179 / −0.256 | 4.393 / −0.042
P3 | 4.393 | 3.854 / −0.539 | 4.274 / −0.119 | 4.384 / −0.009
Table 2. Displacement data of the measured point (unit: mm).

Step | Reference Value (Precision Displacement Control Platform) | Mean | Variance | Error
1 | 0.000 | 0.012 | 0.007 | 0.012
2 | −2.000 | −2.072 | 0.004 | −0.072
3 | −4.000 | −3.940 | 0.004 | 0.060
4 | −6.000 | −6.105 | 0.004 | −0.105
5 | −8.000 | −8.166 | 0.006 | −0.166
6 | −10.000 | −10.296 | 0.008 | −0.296
7 | 0.000 | 0.056 | 0.006 | 0.056