Article

Multi-Camera 3D Digital Image Correlation with Pointwise-Optimized Model-Based Stereo Pairing

1 Graduate School of Advanced Science and Engineering, Hiroshima University, Hiroshima 739-8527, Japan
2 Digital Monozukuri (Manufacturing) Education and Research Center, Hiroshima University, Hiroshima 739-0046, Japan
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5675; https://doi.org/10.3390/s25185675
Submission received: 8 August 2025 / Revised: 8 September 2025 / Accepted: 8 September 2025 / Published: 11 September 2025

Abstract

Dynamic deformation measurement (DDM) is critical across infrastructure and industrial applications. Among various advanced techniques, multi-camera digital image correlation (MC-DIC) stands out for its ability to achieve wide-range, full-field, and non-contact 3D DDM by pairing camera subsystems. However, existing MC-DIC methods typically rely on inefficient manual pairing or a simplistic strategy that aggregates all visible cameras for measuring specific object regions, leading to camera over-grouping. These limitations often result in cumbersome system setup and ill-measured deformations. To overcome these challenges, we propose a novel MC-DIC method with pointwise-optimized model-based stereo pairing (MPMC-DIC). By automatically evaluating and selecting camera pairs based on five evaluation factors derived from the 3D model and calibrated cameras, the proposed method overcomes the over-grouping problem and achieves high-precision DDM of semi-rigid objects. A Ø5 × 5 cm cylinder experiment demonstrated an accuracy of 0.03 mm for both horizontal and depth displacements in the 0.0–5.0 mm range and validated strong robustness against cluttered backgrounds using a 2 × 4 camera array. Vibration measurement of a 9 × 15 × 16 cm PC speaker operating at 50 Hz, using eight surrounding cameras capturing 1920 × 1080 images at 400 fps, confirmed the proposed method's capability to perform wide-range dynamic deformation analysis and its robustness against complex object geometries.

1. Introduction

Dynamic deformation measurement (DDM) has significant applications in structural health monitoring [1], industrial maintenance [2], and vehicle refinement [3]. Several contact and non-contact sensors are widely employed for accurate DDM, such as linear variable differential transformers [4], accelerometers [5], fiber-optic sensors [6], and laser sensors [7]; yet they all suffer from problems such as sparse measurements, mass loading, and spatial inconsistency. The rapid advancement in camera resolution has propelled the development of vision-based methods that utilize camera pixels as dense arrays of optical sensors. Vision-based methods are favored for their easy installation, as well as their non-contact, fast, and full-field consistent measurement capabilities. Pixel intensity array-tracking methods, such as digital image correlation (DIC) [8] and template matching [9], are widely adopted for sub-pixel displacement estimation.
Although DIC often requires a speckle pattern applied to the object surface to facilitate intensity matching, it is widely accepted as a non-contact displacement measurement technique, with the advantages of being mass-load-free and independent of the tested material or the length scale of interest [10]. DIC was first utilized in the 1980s to monitor aluminum specimens [11], tracking motion based on photo-consistency between pre- and post-deformation images. Since then, numerous studies have advanced this technique in terms of refined shape functions [12], high-order interpolation [13], enhanced intensity filtering [14], precise and efficient parameter solving [15], and boosted initial guesses [16]. To simplify practical configurations and allow full-field 3D measurements, 3D-DIC, which integrates stereo photogrammetry with 2D-DIC, was developed in the 1990s [17]. This integration enables 3D DDM using two overlapping images with the support of calibration [18], stereo correspondence [19], and triangulation [20] from stereo photogrammetry. Meanwhile, the speckle pattern also enhances calibration through its rich features and improves measurement precision in the presence of environmental disturbances [21]. Recently, the accuracy of stereo correspondence in 3D-DIC has been significantly enhanced through various strategies, such as image feature description and matching [22], path-guided measurement [23], geometrically constrained semi-global matching [24], and model-based projection [25]. Deep learning has further elevated performance by increasing the measurement range [26], improving precision, and enabling super-resolution speckle image generation [27]. Based on these advancements, 3D-DIC achieves micrometer-level accuracy by tracking the same patterned subset region across stereo image sequences. However, accurate tracking requires sufficient image overlap between stereo camera views, which inherently restricts the measurable region in 3D-DIC systems [28].
A widely adopted solution for enlarging the measurement region is to use multi-camera systems supported by multi-camera geometry (MCG; also called multi-view geometry). MCG is a technique for recovering 3D information from multiple 2D camera images, with applications such as 3D shape measurement [29], 3D DDM [30], 3D position tracking [31], and 3D pose estimation [32]. By capturing images from different locations, MCG effectively compensates for regions that are difficult to cover with a stereo configuration [29], with applications in cultural heritage archiving, surgical planning, structural health monitoring, crime scene reconstruction, and entertainment [33,34,35,36]. Multi-camera systems are established in two main ways: camera array systems and pseudo-camera systems. Camera array systems place multiple real cameras at different locations, each recording an individual view of the object; this configuration ensures high resolution and spatial consistency [37]. Pseudo-camera systems generate virtual cameras by temporal motion, a mirror, or a prism, allowing more cost-effective realization [38]. Continued efforts on 3D scene representation [39], calibration [40], stereo correspondence [29], and patchmatch-based depth estimation [29] have strongly advanced this field. On the other hand, multi-camera configurations also make camera grouping (selection) an important issue that is closely related to measurement accuracy [29]. For accurate 3D shape measurement, several works have modelled camera grouping as an optimization problem based on properties estimated from 2D images, such as visibility, triangulation angle, incident angle, texture, depth, and normal vector [29].
Multi-camera DIC (MC-DIC), which integrates MCG with DIC, is notable for its strong capability for 3D DDM [41]. These methods effectively extend the measurement region by fusing results measured at different object areas [30]. MC-DIC with camera array systems enables wide-range measurements of panoramic and full-region dynamic deformations of column-shaped objects [42] and beam-shaped structures [43], respectively. MC-DIC with pseudo-camera systems supports dual-surface and panoramic measurements while maintaining system compactness and low cost [28]. However, these methods show weaknesses in camera grouping due to the presence of unavoidable errors in the estimated properties [29]. Unlike MCG-based 3D shape measurement, which generally targets large-scale scenes and tolerates such errors, MC-DIC is sensitive to them, as it typically targets small-scale dynamic deformations [28,29]. Optimal camera grouping in MC-DIC thus remains a challenge, and few methods have been proposed to address it; most MC-DIC methods use pre-paired cameras based on experience-guided manual configuration, which increases the complexity and effort required for system setup [28,30].
Recently, 3D models have been integrated into MC-DIC to represent object surfaces in 3D space for isogeometric analysis [44]; common formats include polygon meshes and non-uniform rational B-splines (NURBS) [45]. The rich prior spatial knowledge from 3D models enables precise identification of camera visibility and facilitates accurate DDM by grouping visible cameras [46]. This integration has been successfully applied to efficient 3D displacement measurement of aeronautical composite structures [47], with measurement accuracy validated by high consistency with laser scans [48]. However, these methods adopt a simplistic strategy that groups all visible cameras for a given measurement point, which often results in the inclusion of cameras that yield poor measurements [49]. Such over-grouping can introduce ill-measured deformations caused by factors such as object–background discontinuity, self-occlusion, and reflective highlights. Overcoming this issue can significantly improve robustness against cluttered backgrounds, complex object geometries, and environmental light variations.
To address this gap, we propose a novel MC-DIC method with pointwise-optimized model-based stereo pairing (MPMC-DIC), comprising model-based MC-DIC (MMC-DIC; an extended version of our previous model-based 3D-DIC [25]) and a pointwise-optimized model-based stereo pairing strategy (PMSP). By automatically evaluating multiple cameras and selecting the optimal camera pair for each measurement point on the 3D model based on evaluation factors derived from the 3D model and calibrated cameras, MPMC-DIC overcomes the over-grouping problem and achieves high-precision wide-range 3D DDM of semi-rigid objects. Our main contributions are summarized as follows:
(1)
A novel camera pair evaluation metric is proposed for pointwise-optimized model-based stereo pairing in 3D DDM tasks. Since each camera is evaluated individually prior to pair evaluation, the metric can also be applied to assess individual cameras for 2D-DIC.
(2)
An MC-DIC method with pointwise-optimized model-based stereo pairing is proposed. To the best of our knowledge, this is the first work dedicated to addressing camera pairing in MC-DIC, enhancing robustness against cluttered backgrounds and complex object geometries.
(3)
Experiments were conducted to validate the proposed MPMC-DIC method for 3D DDM, demonstrating micrometer-level accuracy and strong robustness against cluttered backgrounds and complex object geometries.
The paper is organized as follows: Section 2 describes our MPMC-DIC method for 3D displacement estimation based on a pre-measured 3D model. Section 3 validates the micrometer-level accuracy of our MPMC-DIC method in measuring a centimeter-sized cylinder and its robustness against cluttered backgrounds, in comparison with the existing method that groups all visible cameras. Section 4 demonstrates the robustness of our MPMC-DIC method against complex geometries and illustrates its ability to precisely measure objects vibrating at audio frequencies by visualizing detailed vibrational characteristics.

2. Method

To achieve effective and efficient wide-range 3D DDM, this study proposes a novel MC-DIC method with pointwise-optimized model-based stereo pairing. Specifically, the proposed method introduces five evaluation factors to overcome the over-grouping problem that typically arises when only visibility is considered. The five evaluation factors derived from the 3D model and multiple cameras, along with their respective functions, are as follows:
  • Visibility, which determines the availability of cameras for measurement.
  • Subset validity rate, which reflects the ratio of the subset covered by the measurement object. A lower coverage ratio leads to a greater influence from the background.
  • Subset gradient, which reflects the depth inclination of the measurement object relative to the camera within the subset region, especially depth discontinuities due to self-occlusion.
  • Subset ZNCC similarity (hereinafter referred to as subset similarity), which reflects the matching confidence of correlated subsets in pre- and post-deformation images.
  • Disparity, which reflects the angle between a pair of cameras relative to a measurement point. A small disparity often leads to high noise sensitivity, which consequently enlarges the error; a zero disparity disables 3D estimation.
Assuming multi-camera image sequences of a semi-rigid object and an associated reference 3D model, the proposed MPMC-DIC, illustrated in Figure 1, comprises MMC-DIC and PMSP. Following the pipeline in [25] and extending it with multiple cameras and visibility determination, MMC-DIC involves four steps: (a1) camera calibration to ensure precise measurement; (a2) projection and visibility determination to identify the spatial relationship between measurement points and cameras; (a3) 2D-DIC to obtain 2D displacements; (a4) 3D displacement estimation based on camera pairs selected by PMSP. Note that MMC-DIC can also be applied without PMSP by using manual pairing instead. By leveraging the five evaluation factors, PMSP enables automatic and reliable (b1) individual camera evaluation and (b2) camera pair evaluation and selection. This ensures robust and precise wide-range 3D DDM.
MPMC-DIC utilizes multi-view reference images $I_0^c(u,v)$ and $K$ multi-view measurement images $I^c(u,v,t)$ $(t = \tau, \dots, K\tau)$ captured by multiple cameras $C_c$, where $c = 1, \dots, n$. This method assumes that the target object behaves as a semi-rigid body, with a predefined 3D model $\Omega$ serving as the reference framework. The 3D model contains $N$ measurement points $\mathbf{p}_i = (x_i, y_i, z_i)$ $(i = 1, \dots, N)$ with normal vectors $\mathbf{n}_i$. Below, we separately outline the detailed algorithms of MMC-DIC in Section 2.1 and PMSP in Section 2.2.

2.1. Model-Based MC-DIC (MMC-DIC)

The MMC-DIC algorithm is outlined in detail here, including steps (a1) to (a4), as shown in Figure 1.
(a1)
Camera Calibration
Camera calibration involves capturing multiple images of known patterns, including the 3D shape of the measurement object. For each camera, the intrinsic parameter matrix $\mathbf{M}_c$ and distortion parameter vector $\mathbf{d}_c$ are determined.
(a2)
Projection and Visibility Determination
The poses of the cameras relative to the 3D model are pre-determined as the extrinsic parameter matrix $\mathbf{R}_c$ and vector $\mathbf{t}_c$ as follows:

$$[\mathbf{R}_c \mid \mathbf{t}_c] = \mathrm{reg}\big(I_0^c(u,v),\, \Omega\big), \qquad (1)$$

where $\mathrm{reg}(\cdot)$ registers the 3D model to the reference image to determine the extrinsic parameters.
The $N$ measurement points on the 3D model are perspectively projected onto the 2D image planes of the cameras. For each camera, the projected points $(u_i^c, v_i^c)$ are computed using the intrinsic, distortion, and extrinsic parameters:

$$\begin{bmatrix} u_i^c \\ v_i^c \end{bmatrix} = \mathrm{proj}_c\big(\mathbf{p}_i;\, \mathbf{M}_c, \mathbf{d}_c, \mathbf{R}_c, \mathbf{t}_c\big), \qquad (2)$$

where $\mathrm{proj}_c(\cdot)$ denotes the perspective projection.
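Steps (a1) and (a2) map naturally onto standard OpenCV routines. The following is a minimal sketch (not the authors' implementation) of the projection $\mathrm{proj}_c(\cdot)$ in Equation (2), assuming `points_3d` is an N×3 array of model points and `M`, `d`, `R`, `t` are the calibrated parameters:

```python
import numpy as np
import cv2

def project_points(points_3d, R, t, M, d):
    """Project N model points onto one camera's image plane, Eq. (2)."""
    rvec, _ = cv2.Rodrigues(R)                 # rotation matrix -> Rodrigues vector
    uv, _ = cv2.projectPoints(points_3d.astype(np.float64), rvec, t, M, d)
    return uv.reshape(-1, 2)                   # one (u_i^c, v_i^c) row per point
```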
The visibility of the cameras to each measurement point, $a_i^c \in \{0, 1\}$, serving as a strict gate on each camera's availability for pairing, is determined by the following conditions:
  • The point's 2D projection is outside the camera image area;
  • The point's normal vector faces away from the camera (back-facing surface);
  • The point is occluded by the 3D model surfaces.
If any of the above conditions applies, $a_i^c = 0$; otherwise, $a_i^c = 1$. The $c$-th camera is considered for measuring the $i$-th measurement point only when $a_i^c = 1$; this consideration also encompasses its evaluation in PMSP.
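A minimal sketch of these three tests under stated assumptions: `uv` holds the projected points from Equation (2), `normals` the model normals $\mathbf{n}_i$, `view_dir` the camera's viewing direction in model coordinates, and `occluded` a hypothetical ray-casting (or z-buffer) occlusion test against the 3D model surface:

```python
import numpy as np

def visibility(uv, normals, view_dir, occluded, width, height):
    """Return a_i^c for every measurement point of one camera."""
    a = np.ones(len(uv), dtype=np.uint8)
    for i, (u, v) in enumerate(uv):
        inside = (0 <= u < width) and (0 <= v < height)   # projection inside image
        facing = np.dot(normals[i], view_dir) < 0         # surface faces the camera
        if not inside or not facing or occluded(i):       # any failure -> invisible
            a[i] = 0
    return a
```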
(a3)
2D-DIC
Subsets $\Phi_i^c$ centered at the $i$-th projected measurement point are set in each camera image for 2D-DIC. The 2D displacements $(\Delta u_i^c(t), \Delta v_i^c(t))$ of the $i$-th projected measurement point are computed at time $t$ for each camera via 2D-DIC by correlating the subset region of the reference image within the measurement image:

$$\begin{bmatrix} \Delta u_i^c(t) \\ \Delta v_i^c(t) \end{bmatrix} = \mathrm{DIC}\big(I^c(u,v,t),\, I_0^c(u,v),\, \Phi_i^c\big), \qquad (3)$$

where $\mathrm{DIC}(\cdot)$ denotes the 2D-DIC function.
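In the experiments, Equation (3) is realized with the 2D-DIC module of OpenCorr; as a rough, integer-pixel stand-in, ZNCC subset matching can be sketched with OpenCV's normalized template matching (real DIC refines this to sub-pixel accuracy with shape functions, which this sketch omits):

```python
import cv2
import numpy as np

def dic_subset_match(img_t, img_0, center, half=64):
    """Integer-pixel ZNCC match of one subset; assumes it lies inside the image."""
    u, v = center                                          # projected point (u_i^c, v_i^c)
    tpl = img_0[v - half:v + half + 1, u - half:u + half + 1]
    res = cv2.matchTemplate(img_t, tpl, cv2.TM_CCOEFF_NORMED)  # ZNCC response map
    _, zncc, _, (x, y) = cv2.minMaxLoc(res)
    return (x + half - u, y + half - v), zncc              # displacement and similarity S
```

Note that the peak correlation value doubles as the subset similarity used later in PMSP.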
(a4)
Three-Dimensional Displacement Estimation
The $N$ projected measurement points $(u_i^c, v_i^c)$ are considered to be displaced to $(u_i^c(t), v_i^c(t))$ at time $t$ in the 2D images as follows:

$$\begin{bmatrix} u_i^c(t) \\ v_i^c(t) \end{bmatrix} = \begin{bmatrix} u_i^c \\ v_i^c \end{bmatrix} + \begin{bmatrix} \Delta u_i^c(t) \\ \Delta v_i^c(t) \end{bmatrix}. \qquad (4)$$
Camera pairs of the $N$ measurement points, $\{l_i(t), r_i(t)\}$, are determined by PMSP, as outlined in Section 2.2. The 3D positions of the $N$ measurement points at time $t$, $\tilde{\mathbf{p}}_i(t) = (\tilde{x}_i(t), \tilde{y}_i(t), \tilde{z}_i(t))$, are estimated via triangulation using their 2D position vectors from the selected camera pairs as follows:

$$\tilde{\mathbf{p}}_i(t) = \mathrm{tri}\big(\{u_i^c(t), v_i^c(t)\};\, \{l_i(t), r_i(t)\}, \mathbf{M}_c, \mathbf{d}_c, \mathbf{R}_c, \mathbf{t}_c\big), \qquad (5)$$

where $\mathrm{tri}(\cdot)$ denotes the triangulation function. The relative 3D displacement, $\Delta\mathbf{p}_i(t) = (\Delta x_i(t), \Delta y_i(t), \Delta z_i(t))$, is computed as the difference between the 3D coordinates of the measurement points:

$$\Delta\mathbf{p}_i(t) = \tilde{\mathbf{p}}_i(t) - \mathbf{p}_i. \qquad (6)$$
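A minimal sketch of $\mathrm{tri}(\cdot)$ in Equation (5) for one point, using OpenCV's linear triangulation; here `P_l` and `P_r` are assumed to be the 3×4 projection matrices $\mathbf{M}_c [\mathbf{R}_c \mid \mathbf{t}_c]$ of the selected pair, and the 2D positions are assumed to be undistorted beforehand:

```python
import numpy as np
import cv2

def triangulate(P_l, P_r, uv_l, uv_r):
    """Linear triangulation of one displaced point from the selected pair."""
    X = cv2.triangulatePoints(P_l, P_r,
                              np.float64(uv_l).reshape(2, 1),
                              np.float64(uv_r).reshape(2, 1))
    return (X[:3] / X[3]).ravel()              # homogeneous -> Euclidean p~_i(t)
```

Subtracting the reference position `p_i` then yields the displacement of Equation (6).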

2.2. Pointwise-Optimized Model-Based Stereo Pairing (PMSP)

The PMSP algorithm is outlined in detail here, including steps (b1) and (b2), as shown in Figure 1.
(b1)
Individual Camera Evaluation
Depth images $I_d^c(u,v)$ are determined for each camera via 2D rendering of the 3D model using the intrinsic, distortion, and extrinsic parameters. Each pixel of a depth image records the depth distance of its perspectively corresponding 3D model point, or 0 when the pixel is not covered by the 3D model. Mask images $I_m^c(u,v)$ define the coverage region of the 3D model rendering as follows:

$$I_m^c(u,v) = \begin{cases} 1, & I_d^c(u,v) > 0, \\ 0, & \text{otherwise}. \end{cases} \qquad (7)$$
The 3D model rendering coverage ratios in the subsets $\Phi_i^c$ are computed as the subset validity rates $V_i^c$ as follows:

$$V_i^c = \frac{1}{|\Phi_i^c|} \sum_{(u,v) \in \Phi_i^c} I_m^c(u,v), \qquad (8)$$

where $|\Phi_i^c|$ denotes the number of pixels in the subset region.
The inclination degree of the 3D model region projected in the subsets is computed as the subset gradients $G_i^c$ as follows:

$$G_i^c = \frac{\displaystyle\sum_{(u,v) \in \Phi_i^c} I_m^c(u,v)\, \frac{\max_{3\times3}\!\big(I_d^c(u,v)\big) - \min_{3\times3}\!\big(I_d^c(u,v)\big)}{2}}{\displaystyle\sum_{(u,v) \in \Phi_i^c} I_m^c(u,v)}, \qquad (9)$$

where $\max_{3\times3}(\cdot)$ and $\min_{3\times3}(\cdot)$ search for the maximum and minimum depth distance, respectively, in the $3\times3$ neighborhood of $(u,v)$, ignoring pixels not covered by the 3D model rendering.
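A minimal sketch of Equations (8) and (9), assuming `depth` ($I_d^c$) and `mask` ($I_m^c$) are pre-rendered arrays and the subset lies fully inside the image; pixels not covered by the rendering are excluded from the 3×3 max/min search via ±inf substitution:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def validity_and_gradient(depth, mask, u0, v0, half=64):
    """Subset validity rate V (Eq. 8) and subset gradient G (Eq. 9)."""
    sub = np.s_[v0 - half:v0 + half + 1, u0 - half:u0 + half + 1]
    covered = mask.astype(bool)
    V = covered[sub].mean()                              # coverage ratio in the subset
    d_hi = np.where(covered, depth, -np.inf)             # ignore uncovered pixels
    d_lo = np.where(covered, depth, np.inf)
    half_range = (maximum_filter(d_hi, size=3) - minimum_filter(d_lo, size=3)) / 2.0
    G = half_range[sub][covered[sub]].mean()             # mean over covered subset pixels
    return V, G
```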
The subset similarities $S_i^c(t)$ of the $i$-th projected measurement point are computed at time $t$ for each camera via 2D-DIC:

$$S_i^c(t) = \mathrm{DIC}\big(I^c(u,v,t),\, I_0^c(u,v),\, \Phi_i^c\big). \qquad (10)$$

Note that $\mathrm{DIC}(\cdot)$ here denotes the same processing as in Equation (3), and the subset similarities are computed along with the 2D displacements.
Camera evaluation scores at time $t$, $h_i^c(t)$, are computed for each measurement point as follows:

$$h_i^c(t) = \frac{1}{2}\Big[f_v(V_i^c)\, f_g(G_i^c) + f_s\big(S_i^c(t)\big)\Big], \qquad (11)$$

where $f_v(\cdot)$ is the evaluation function for the subset validity rate with a parameter $\alpha_v$ (Figure 2a) as follows:

$$f_v(V) = \begin{cases} \dfrac{V - \alpha_v}{1 - \alpha_v}, & \alpha_v \le V \le 1, \\ 0, & \text{otherwise}. \end{cases} \qquad (12)$$
Although the subset validity rate is theoretically defined within the range $(0, 1]$, it typically takes values significantly above 0 due to the 3D model rendering coverage. Accordingly, its evaluation function $f_v(\cdot)$ is defined as a piecewise function, which employs a monotonically increasing linear function from 0 to 1 over the interval $[\alpha_v, 1]$ and a constant zero function otherwise, to enhance distinguishability while maintaining the linear relationship. $\alpha_v$ is defined in the range $[0, 1)$. In practice, it is selected close to the lower bound of the computed subset validity rate distribution, which typically lies around 0.5. $f_g(\cdot)$ is the evaluation function for the subset gradient with parameters $\alpha_g$ and $\beta_g$ (Figure 2b) as follows:
$$f_g(G) = \mathrm{logistic}(\beta_g - \alpha_g G), \qquad (13)$$

$$\mathrm{logistic}(x) = \frac{1}{1 + e^{-x}}. \qquad (14)$$
To assign high evaluation scores to relatively small subset gradients $G$ and low scores to relatively large ones, while ensuring a monotonically decreasing trend and avoiding an abrupt change (e.g., a step function), the subset gradient evaluation function $f_g(\cdot)$ is defined as a flipped and shifted logistic function. The logistic function $\mathrm{logistic}(\cdot)$ is an S-shaped function commonly used in machine learning, with smooth, monotonically increasing values within the range $(0, 1)$ [50]. $\mathrm{logistic}(\beta_g)$ governs the maximum evaluation score at $G = 0$. To provide high and distinguishable evaluation scores for small subset gradients, an empirically acceptable value range for $\beta_g$ is $[4, 8]$. $\beta_g / \alpha_g$ defines the point at which the function decreases to 0.5; in other words, this ratio controls the evaluation of large subset gradients. In practice, $\alpha_g$ is selected so that $\beta_g / \alpha_g$ lies around half of the upper bound of the computed subset gradient distribution. $f_s(\cdot)$ is the evaluation function for the subset similarity with a parameter $\alpha_s$ (Figure 2c) as follows:
$$f_s(S) = \begin{cases} \dfrac{S - \alpha_s}{1 - \alpha_s}, & \alpha_s \le S \le 1, \\ 0, & \text{otherwise}. \end{cases} \qquad (15)$$
Similar to $f_v(\cdot)$, the subset similarity evaluation function $f_s(\cdot)$ is defined as a piecewise function, which employs a monotonically increasing linear function from 0 to 1 over the interval $[\alpha_s, 1]$ and a constant zero function otherwise, to enhance distinguishability, considering that most subset similarity values tend to be concentrated near 1 in practical applications. $\alpha_s$ is defined in the range $[-1, 1)$ and, in practice, is selected close to the lower bound of the computed subset similarity distribution.
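The following is a direct transcription of Equations (11)–(15) as a sketch; `alpha_v`, `alpha_g`, `beta_g`, and `alpha_s` are the PMSP parameters (cf. Table 3), passed in explicitly here:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))                # Eq. (14)

def f_v(V, alpha_v):                               # Eq. (12): piecewise linear
    return (V - alpha_v) / (1.0 - alpha_v) if alpha_v <= V <= 1.0 else 0.0

def f_g(G, alpha_g, beta_g):                       # Eq. (13): flipped, shifted logistic
    return logistic(beta_g - alpha_g * G)

def f_s(S, alpha_s):                               # Eq. (15): piecewise linear
    return (S - alpha_s) / (1.0 - alpha_s) if alpha_s <= S <= 1.0 else 0.0

def camera_score(V, G, S, alpha_v, alpha_g, beta_g, alpha_s):
    """Individual camera evaluation score h, Eq. (11)."""
    return 0.5 * (f_v(V, alpha_v) * f_g(G, alpha_g, beta_g) + f_s(S, alpha_s))
```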
(b2)
Camera Pair Evaluation and Selection
The disparity between the $l$-th and the $r$-th cameras for the $i$-th measurement point, $D_i^{lr}$ $(l, r = 1, \dots, n)$, is computed using the extrinsic parameters as follows:

$$D_i^{lr} = \arccos\!\left(\frac{\mathbf{v}_i^l \cdot \mathbf{v}_i^r}{\|\mathbf{v}_i^l\|\, \|\mathbf{v}_i^r\|}\right), \qquad (16)$$

$$\mathbf{v}_i^c = \mathbf{p}_i + \mathbf{R}_c^\top \mathbf{t}_c, \qquad (17)$$

where $\mathbf{v}_i^c$ denotes the direction vector of the $i$-th measurement point relative to the $c$-th camera's optical center, $-\mathbf{R}_c^\top \mathbf{t}_c$.
For a pair containing the $l$-th and $r$-th cameras, the camera pair evaluation score $H_i^{lr}(t)$ is computed as follows:

$$H_i^{lr}(t) = f_d(D_i^{lr})\, h_i^l(t)\, h_i^r(t), \qquad (18)$$

where $f_d(\cdot)$ is the evaluation function for the disparity with a parameter $\beta_d$ (Figure 2d) as follows:

$$f_d(D) = 2 \cdot \mathrm{logistic}(\beta_d D) - 1. \qquad (19)$$
For a pair of cameras, a large disparity can effectively mitigate the influence of noise, while a small disparity often enlarges the error, and a zero disparity disables 3D estimation. To reduce the evaluation score of a camera pair with small disparity and guarantee a non-zero disparity value, the disparity evaluation function $f_d(\cdot)$ is defined as a scaled and shifted logistic function. Through scaling and shifting, this logistic function ensures that the evaluation value is 0 at $D = 0$ and approaches 1 as $D$ increases. Compared to a monotonically increasing linear function, it offers a steeper growth rate near $D = 0$, followed by a progressively decreasing rate of increase, reflecting the fact that measurement noise sensitivity rises more rapidly as the disparity approaches 0. The parameter $\beta_d / 2$ controls the maximum rate of increase in the initial phase; in other words, a smaller $\beta_d$ encourages a stronger tendency toward larger disparities, while a larger $\beta_d$ encourages a weaker tendency. $\beta_d$ is selected according to the desired encouragement level under the given physical conditions. To ensure effectively high and low evaluation scores for camera pairs with large and small disparities, respectively, an empirically acceptable value range for $\beta_d$ is $[2, 20]$.
The camera pair with the highest evaluation score, $\{l_i(t), r_i(t)\}$, is selected for each measurement point to estimate the 3D displacement:

$$\{l_i(t), r_i(t)\} = \underset{l,\, r}{\arg\max}\; H_i^{lr}(t). \qquad (20)$$
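A minimal sketch of step (b2), covering Equations (16)–(20) under stated assumptions: `centers[c] = -R_c.T @ t_c` is the $c$-th camera's optical center in model coordinates, `h` holds the scores from step (b1), and `visible` is the $a_i^c$ flag for the current point `p`:

```python
import itertools
import numpy as np

def f_d(D, beta_d):                                # Eq. (19): 0 at D = 0, -> 1 as D grows
    return 2.0 / (1.0 + np.exp(-beta_d * D)) - 1.0

def select_pair(p, centers, h, visible, beta_d):
    """Evaluate all visible camera pairs for point p and pick the best, Eq. (20)."""
    best_pair, best_H = None, -1.0
    for l, r in itertools.combinations(range(len(centers)), 2):
        if not (visible[l] and visible[r]):
            continue
        vl, vr = p - centers[l], p - centers[r]    # Eq. (17): rays from cameras to p_i
        cos_ang = np.dot(vl, vr) / (np.linalg.norm(vl) * np.linalg.norm(vr))
        D = np.arccos(np.clip(cos_ang, -1.0, 1.0)) # Eq. (16): disparity angle
        H = f_d(D, beta_d) * h[l] * h[r]           # Eq. (18): pair evaluation score
        if H > best_H:
            best_pair, best_H = (l, r), H
    return best_pair
```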

3. Accuracy Verification

3.1. Experimental Setting

To assess the robustness of our proposed MPMC-DIC against cluttered backgrounds in wide-range 3D deformation measurement, we validated its accuracy in measuring the rigid displacements of a moving cylinder within a random-speckle background, as shown in Figure 3a, and compared the results with those obtained from the method considering only visibility (hereinafter referred to as the visibility-only method). A Ø5 × 5 cm cylinder (Figure 3b), painted with a random pattern and affixed with 7 mm circular markers as references, was connected to an XYZ stage fixed to the platform. Behind the cylinder, a 15 × 15 × 20 cm cuboid was fixed on the platform, with its front surface forming part of the background. Printed speckle patterns were affixed to the platform and background. A high-precision 3D scanner (ATOS Compact Scan, ZEISS, Oberkochen, Germany) with 3 μm uncertainty, positioned 53 cm above the cylinder, measured the displacements of five markers on the cylinder's top surface, with their average serving as the reference displacement. The 3D model, reconstructed by the ATOS Compact Scan as shown in Figure 3c,d, included 176 measurement points: 72 on the top surface (3 circles with 7 mm spacing × 24 points per circle with 15° spacing) and 104 on the front surface (8 semi-circles with 5.5 mm spacing × 13 points per semi-circle with 15° spacing). Eight cameras were arranged in a 2 × 4 curved grid to capture 8-bit Bayer images, covering the cylinder's top surface and front 180° surface with approximately 30 cm camera spacing and measuring distance. Two PCs recorded these Bayer images using CoaXlink Quad CXP-12 frame grabbers (Euresys, Seraing, Belgium) and converted them to grayscale images via OpenCV, with each PC connected to four cameras and equipped with an i9-12900K CPU (Intel, Santa Clara, CA, USA), 64 GB RAM, an RTX 3090 GPU (NVIDIA, Santa Clara, CA, USA), and Windows 11 Professional. Table 1 shows the optical system configurations.
To verify the accuracy of MPMC-DIC, the cylinder was moved along the depth (z-) and horizontal (x-) directions in 11 steps (0.0–5.0 mm, around 0.5 mm per step), with images captured by the eight cameras and the reference displacement measured using the ATOS Compact Scan at each step. Figure 4 shows the images captured at the initial position as reference images. Prior to measurement, the cameras were calibrated using a checkerboard and registered using circular marker points from the 3D model and 2D reference images. After capturing, the recorded images were post-processed to measure displacements for accuracy verification, comparing MPMC-DIC with the visibility-only method. In both methods, the 2D-DIC method in the open-source 3D-DIC tool OpenCorr (version 1.0) [22] was used for 2D displacement measurement, with the configurations shown in Table 2. In MPMC-DIC, the evaluation parameters for PMSP were set as shown in Table 3. The mean $\mu_D$ and standard deviation $\sigma_D$ of the measured displacement $\tilde{D}_i$ over the 176 measurement points are computed for error analysis. Combining these with the reference displacement $D_{\mathrm{ref}}$ and its uncertainty $\sigma_{\mathrm{ref}}$, the mean and standard deviation of the error are defined as $\mu_e = D_{\mathrm{ref}} - \mu_D$ and $\sigma_e = (\sigma_{\mathrm{ref}}^2 + \sigma_D^2)^{1/2}$, respectively, and used as error indices.
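A minimal sketch of these error indices, assuming `d_measured` is the array of 176 per-point displacement magnitudes at one movement step:

```python
import numpy as np

def error_indices(d_measured, d_ref, sigma_ref):
    """Mean error and combined standard deviation of error."""
    mu_D, sigma_D = d_measured.mean(), d_measured.std()
    mu_e = d_ref - mu_D                            # mean error
    sigma_e = np.hypot(sigma_ref, sigma_D)         # sqrt(sigma_ref^2 + sigma_D^2)
    return mu_e, sigma_e
```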

3.2. Camera Pair Evaluation and Selection

Figure 5 shows the PMSP results at the 3.0 mm horizontal movement. For measurement points on the top surface of the cylinder, the paired cameras included the first, second, third, and fourth cameras, while the fifth to eighth cameras were excluded from pairing due to lack of visibility; most of these points selected camera pairs with high disparities for measurement precision, such as the camera pair {1, 4}. All measurement points on the front surface paired two cameras oriented toward them; this ensured visibility and prevented large object surface inclination and object–background discontinuity within the subsets, enabling high-quality 2D measurement. Table 4 and Table 5 illustrate the PMSP process for an example measurement point on the front surface of the cylinder, circled in Figure 3c and Figure 5, at 3.0 mm horizontal movement. In Table 4, each camera was first evaluated individually based on visibility and subset characteristics. As shown in Figure 4, the first, second, fifth, and sixth cameras were visible, while the others were not; the subsets from the second and sixth cameras exhibited significant object–background discontinuity, with noticeable background speckles, and higher object surface inclination compared to those from the first and fifth cameras. As a result, the first and fifth cameras achieved high individual evaluation scores $h$ exceeding 0.990, whereas the second and sixth cameras obtained lower values due to reduced subset validity rate $V$ and similarity $S$ and increased subset gradient $G$; invisible cameras were excluded from the evaluation. In Table 5, the disparity $D$ of each camera pair was then evaluated. Most visible camera pairs achieved evaluation scores $f_d(D)$ slightly above 0.9, except for pairs {1, 6} and {2, 5}, which obtained 0.982 and 0.976, respectively. Finally, in the camera pair evaluation stage, the camera pair {1, 5} achieved the highest evaluation score $H$ of 0.888, owing to the high individual camera evaluation scores of the first and fifth cameras. This camera pair was therefore selected to measure the example point.

3.3. Comparison with Visibility-Only Method

Figure 6 compares MPMC-DIC with the visibility-only method for measuring the horizontal displacements of the cylinder. In Figure 6a, the visibility-only method exhibits increasing deviations with horizontal movements, particularly on the left and top–back sides of the cylinder. These deviations were attributed to over-grouped cameras whose subsets include object–background discontinuity. The presence of background speckles in these subsets led to tracking failures during cylinder movement. In contrast, MPMC-DIC achieved accurate measurements across the entire 0.0–5.0 mm movement range by selecting appropriate camera pairs, as demonstrated in Table 4 and Table 5. Quantitative evaluation in Figure 6b shows that the mean error $\mu_e$ and standard deviation $\sigma_e$ of MPMC-DIC remain below 0.01 mm and 0.02 mm, respectively, whereas the visibility-only method shows rapidly increasing error, reaching a maximum $\mu_e$ of 0.54 mm and $\sigma_e$ of 1.04 mm at 5.0 mm movement. A similar trend is observed in the depth displacement measurement, as illustrated in Figure 7. MPMC-DIC consistently maintains accuracy with $\mu_e$ and $\sigma_e$ below 0.01 mm and 0.02 mm, respectively, throughout the 0.0–5.0 mm movements, while the visibility-only method exhibits increasing deviations, particularly on the front side due to object–background discontinuity within the subsets from the first, fourth, fifth, and eighth cameras, with a maximum $\mu_e$ of 0.47 mm and $\sigma_e$ of 0.80 mm.

3.4. Verification on Robustness to Digital Speckle Pattern

To verify the robustness of our proposed MPMC-DIC to different speckle types, we also measured the displacements of a planar object with a digital speckle pattern and compared the results with those obtained from the visibility-only method. The 3 × 15 cm planar object was part of the front surface of the cuboid behind the cylinder, as shown in Figure 8a. Its 3D model is shown in Figure 8b, which contains 5 × 35 measurement points evenly distributed at horizontal and vertical spacings of 3.75 mm and 3.95 mm, respectively. The same images, experimental configurations, and error metric described in Section 3.1 were adopted for this measurement and analysis. For the error analysis, the reference displacement was set to zero, since the cuboid remained stationary during image acquisition.
Similar to the measurements of the cylinder displacements (Figure 6 and Figure 7), MPMC-DIC also outperforms the visibility-only method in measuring the planar object displacements (Figure 9), as the visibility-only method was affected by subsets of over-grouped cameras that contained the moving cylinder. Quantitative evaluation at both horizontal and depth cylinder movements shows that the mean error $\mu_e$ and standard deviation $\sigma_e$ of MPMC-DIC remain below 0.03 mm and 0.02 mm, respectively. In contrast, the visibility-only method exhibits increasing errors, with a maximum $\mu_e$ of 0.11 mm and $\sigma_e$ of 0.37 mm at 5.0 mm horizontal cylinder movement, and a maximum $\mu_e$ of 0.22 mm and $\sigma_e$ of 0.78 mm at 5.0 mm depth cylinder movement.

3.5. Computational Efficiency Evaluation

To evaluate the computational efficiency of MPMC-DIC relative to the visibility-only method, we recorded the execution time required to measure the cylinder displacements under the same conditions and on the same recording PC as described in Section 3.1. The time taken to compute the 3D displacements of the 176 measurement points for one frame is summarized in Table 6. When executed sequentially on the CPU, the visibility-only method required 3816.70 s, whereas MPMC-DIC took 30,906.56 s, with camera pairing incurring an additional execution time about seven times that of the visibility-only method. Nevertheless, the projection and visibility determination (PVD) and depth image determination (DID) procedures dominated the computation owing to their 3D-model-element-wise and pixel-wise operations, making them well suited for parallel processing. With GPU acceleration, the execution times of PVD and DID were significantly reduced to 167.81 s and 313.47 s, respectively. Consequently, the total execution times of MPMC-DIC and the visibility-only method were reduced to 633.03 s and 312.24 s, respectively, bringing the additional execution time of MPMC-DIC down to nearly the same level as the total time of the visibility-only method.
In conclusion, unlike the visibility-only method, which loses accuracy with over-grouped cameras, MPMC-DIC maintains accurate measurements by selecting camera pairs with high evaluation scores, requiring only an additional execution time equivalent to that of the visibility-only method to exclude the influence of subsets with object–background discontinuity. These results demonstrate the robustness of our proposed MPMC-DIC against cluttered backgrounds in wide-range 3D deformation measurement.

4. Vibration Measurement on PC Speaker

To assess the robustness of our proposed MPMC-DIC against complex object geometries in wide-range 3D DDM, we applied it to measure the panoramic vibrations of a 9 × 15 × 16 cm PC speaker featuring a recessed membrane, as shown in Figure 10a, and visually compared it with the visibility-only method. The speaker (Figure 10b) was painted with a random pattern and affixed with 7 mm circular markers as references. Its 3D model (Figure 10c,d) was reconstructed by the ATOS Compact Scan and includes 32,055 measurement points. The same cameras and PCs as described in Section 3, as well as the same calibration and registration methods, were utilized in the vibration measurement for image capturing, recording, and processing. The eight cameras, surrounding the speaker at a distance of 99 cm, captured 1920 × 1080-pixel reference and measurement images with a 2.3 ms exposure; measurement images were captured at 400 fps for 0.5 s while the PC speaker played 50 Hz audio. Figure 11 shows the reference images, in which severe self-occlusion can be observed at the membrane in the second and fifth camera images. The measurement image series were used to measure the vibration displacements using MPMC-DIC and the visibility-only method with configurations similar to those of the accuracy verification: the 2D-DIC method in OpenCorr was performed for 2D measurement using 129 × 129-pixel subsets in both methods, and camera pairs were evaluated and selected using PMSP with the evaluation parameters shown in Table 7. Since the speaker remained stationary throughout the image capturing process, PMSP was applied solely to the first frame in this experiment, with the subsequent frames using the same camera pair selection. The eight-point moving-average component was removed from the measured vibration displacements to suppress artifacts caused by camera self-motion. The peak-to-peak value of the $\eta$-direction displacements is calculated as $\max(\Delta\eta(t)) - \min(\Delta\eta(t))$, where $\eta = x, y, z$; the highest value among the three directions is defined as the peak-to-peak vibration value.
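A minimal sketch of this post-processing, assuming `disp` is a (T, 3) array of $(\Delta x, \Delta y, \Delta z)$ samples for one measurement point captured at 400 fps:

```python
import numpy as np

def peak_to_peak(disp, window=8):
    """Remove the 8-point moving average, then take the largest per-direction peak-to-peak."""
    kernel = np.ones(window) / window
    detrended = np.empty_like(disp)
    for k in range(disp.shape[1]):                 # eta = x, y, z
        trend = np.convolve(disp[:, k], kernel, mode="same")
        detrended[:, k] = disp[:, k] - trend
    ptp = detrended.max(axis=0) - detrended.min(axis=0)
    return ptp.max()                               # peak-to-peak vibration value
```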
Figure 12 visualizes the PMSP results. Most measurement points on the front, back, and lateral surfaces paired adjacent cameras that were oriented toward them, as these cameras ensured visibility along with high evaluation scores in terms of the subset validity rate and gradient. An exception occurred at a few measurement points located on the lateral surfaces close to the membrane region, which selected camera pair {1, 3} or {4, 6}. This resulted from the fact that the second/fifth camera provided low evaluation scores due to severe self-occlusion in the subset region, while the other cameras lacked visibility. Regarding measurement points on the top surface, the visibility and similar subset conditions of all cameras allowed them to select a reliable camera pair with higher disparity; for example, some points selected the camera pair {1, 6} or {6, 8}. For measurement points located on the membrane region, the third and fourth cameras were paired, with the first, sixth, seventh, and eighth cameras excluded due to lack of visibility. Although the second and fifth cameras were also visible to some of these measurement points, they were excluded from pairing due to low camera pair evaluation scores resulting from severe self-occlusion, as illustrated in Figure 11.
Based on the PMSP, our MPMC-DIC achieved accurate 3D vibration measurements, as shown in Figure 13a, revealing a circular vibration distribution centered on the membrane. The vibration amplitude reaches its peak at the center and gradually diminishes toward the edges. In contrast, the visibility-only method, as shown in Figure 13b, exhibits substantial deviations due to the over-grouping of all visible cameras. Ill-measured deformations from the second and fifth cameras led to remarkable errors. As for the measurements on the speaker housing, Figure 14 presents the periodic vibration measured using MPMC-DIC. Circular vibration distributions excited by the 50 Hz audio are observed on the left and right lateral surfaces. Due to measurement biases among cameras, minor spatial discontinuities in the measured deformations are observed along the boundaries where the pairing changes, such as the transition region between the top and left lateral surfaces at 7.5 ms. Future work is expected to mitigate these discontinuities by grouping a larger number of cameras and adopting weighted triangulation.
Figure 15 presents the 3D vibrations of three points ($p_1$, $p_2$, and $p_3$) located on the speaker membrane. Corresponding to the distribution shown in Figure 13a, these points exhibit gradually decreasing peak-to-peak vibration values of 1.252, 1.032, and 0.598 mm, respectively. Their z-direction frequency amplitude spectra reveal a prominent peak at 50 Hz, aligned with the speaker's operating frequency, as well as harmonic peaks at 100 and 150 Hz. Figure 16 shows the 3D vibrations of eight points located on the speaker housing, with respective peak-to-peak vibration values of 0.009, 0.011, 0.012, 0.018, 0.013, 0.013, 0.012, and 0.009 mm. Although the vibration amplitudes of these points are significantly smaller than those on the membrane, their frequency responses exhibit a similar pattern, with maximum amplitudes at 50 Hz and harmonic peaks at 100 and 150 Hz. Across all 11 points, the frequency amplitude spectra reveal that the amplitudes at 150 Hz are slightly or significantly greater than those at 100 Hz. This indicates the presence of a mechanical resonance in the PC speaker structure at 150 Hz, in addition to the harmonic components.
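A minimal sketch of the single-sided amplitude spectrum behind these 50/100/150 Hz peaks, assuming the 400 fps sampling rate and a detrended z-displacement series `dz` for one point:

```python
import numpy as np

def amplitude_spectrum(dz, fs=400.0):
    """Single-sided FFT amplitude spectrum of one displacement series."""
    n = len(dz)
    amp = 2.0 * np.abs(np.fft.rfft(dz)) / n        # single-sided amplitude
    freq = np.fft.rfftfreq(n, d=1.0 / fs)          # frequency axis in Hz
    return freq, amp
```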
These results demonstrate that our proposed MPMC-DIC enables accurate wide-range 3D DDM by selecting camera pairs with high evaluation scores, effectively handling the self-occlusions caused by complex geometries and minimizing the deviations observed in the visibility-only method, thereby confirming its robustness.

5. Conclusions

In this study, we proposed a novel MPMC-DIC method for wide-range 3D DDM of semi-rigid objects. The proposed method enables automatic camera pair evaluation and selection, filtering out cameras that potentially provide ill-measured deformations and thus ensuring accurate measurements. Experiments measuring cylinder rigid displacements and PC speaker vibrations validate its accuracy, along with its robustness against cluttered backgrounds and complex object geometries, in comparison with the visibility-only method. Visual and quantitative evaluations show large deviations with a maximum mean error of 0.54 mm for the visibility-only method, whereas MPMC-DIC maintains a mean error below 0.03 mm with an additional execution time approximately equivalent to that of the visibility-only method. Minor spatial discontinuities are observed in the measured deformations along boundaries where the camera pairing changes, due to biases among cameras; this issue is expected to be mitigated in future work by grouping multiple cameras. Although this has yet to be proven, MPMC-DIC may also enhance robustness in more practical situations, such as environments with light variations, based on its capability to pair reliable cameras.
As a newly proposed technique, MPMC-DIC demonstrates considerable promise for applications such as in situ monitoring with uncontrollable backgrounds, assessment of products with complex geometries, and measurement in environments with reflective highlights. Our future work will further demonstrate its practical performance and continue to extend its measurement capabilities, including local strain measurement. Deeper investigations will be strongly beneficial for developing this method into a highly effective and impactful tool, particularly with regard to the following aspects:
  • Evaluation factors and functions: The five evaluation factors were selected based on experience, and the evaluation functions were designed to meet the requirements of each factor; however, they have not been comprehensively proven to be the only viable choices. Alternative evaluation factors or functions may further improve the distinction between reliable and ill-conditioned camera pairs.
  • Automatic parameter determination: The evaluation parameters for PMSP in this study were assigned manually. Future research on automatic parameter determination, e.g., by integrating machine learning or deep learning, could further simplify system setup.
  • Multiple-camera grouping: Although cameras were paired into stereo subsystems in this study, a subsystem can involve more cameras, and this extension is expected to enable higher-accuracy 3D DDM with weighted triangulation.
  • Active camera positioning: Our evaluation metric was applied to camera pairing based on static camera positions. In the future, it could also be utilized for active camera positioning, or for both simultaneously.

Author Contributions

Conceptualization, I.I.; methodology, W.Q.; software, W.Q. and S.H.; validation, W.Q., S.H. and I.I.; formal analysis, W.Q. and I.I.; investigation, W.Q. and F.W.; resources, K.S. and I.I.; data curation, W.Q. and K.S.; writing—original draft preparation, W.Q.; writing—review and editing, W.Q., F.W., S.H. and I.I.; visualization, W.Q.; supervision, I.I.; project administration, I.I.; funding acquisition, W.Q. and I.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by JST A-STEP grant number JPMJTR231A and in part by JST SPRING grant number JPMJSP2132.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, X.; Ilanko, S.; Mochida, Y.; Tighe, R.C. A Review on Vibration-Based Damage Detection Methods for Civil Structures. Vibration 2023, 6, 843–875.
  2. Peng, C.; Gao, H.; Liu, X.; Liu, B. A visual vibration characterization method for intelligent fault diagnosis of rotating machinery. Mech. Syst. Signal Proc. 2023, 192, 110229.
  3. Wang, Q.; Zeng, J.; Wei, L.; Dong, H.; Shi, Y. Experimental and numerical investigation on multi-module coupling vibration performance of light rail vehicle. J. Vib. Eng. Technol. 2024, 12, 745–756.
  4. Nhung, N.T.C.; Nguyen, H.Q.; Huyen, D.T.; Nguyen, D.B.; Quang, M.T. Development and application of linear variable differential transformer (LVDT) sensors for the structural health monitoring of an urban railway bridge in Vietnam. Eng. Technol. Appl. Sci. Res. 2023, 13, 11622–11627.
  5. Jing, C.; Huang, G.; Li, X.; Zhang, Q.; Yang, H.; Zhang, K.; Liu, G. GNSS/accelerometer integrated deformation monitoring algorithm based on sensors adaptive noise modeling. Measurement 2023, 218, 113179.
  6. Li, T.; Wu, D.; Khyam, M.O.; Guo, J.; Tan, Y.; Zhou, Z. Recent advances and tendencies regarding fiber optic sensors for deformation measurement: A review. IEEE Sens. J. 2021, 22, 2962–2973.
  7. Garg, P.; Moreu, F.; Ozdagli, A.; Taha, M.R.; Mascareñas, D. Noncontact dynamic displacement measurement of structures using a moving laser Doppler vibrometer. J. Bridge Eng. 2019, 24, 04019089.
  8. Wang, F.; Shimasaki, K.; Ishii, I. HFR-Video-Based Chatter Monitoring Synchronized with Tool Rotation. IEEE Trans. Instrum. Meas. 2025, 74, 5033912.
  9. Azimbeik, K.; Mahdavi, S.H.; Rofooei, F.R. Improved image-based, full-field structural displacement measurement using template matching and camera calibration methods. Measurement 2023, 211, 112650.
  10. Jones, E.M.C.; Iadicola, M.A. A good practices guide for digital image correlation. Int. Digit. Image Correl. Soc. 2018.
  11. Peters, W.; Ranson, W. Digital imaging techniques in experimental stress analysis. Opt. Eng. 1982, 21, 427–431.
  12. Lu, H.; Cary, P. Deformation measurements by digital image correlation: Implementation of a second-order displacement gradient. Exp. Mech. 2000, 40, 393–400.
  13. Pan, B.; Qian, K.; Xie, H.; Asundi, A. Two-dimensional digital image correlation for in-plane displacement and strain measurement: A review. Meas. Sci. Technol. 2009, 20, 062001.
  14. Pan, B. Bias error reduction of digital image correlation using Gaussian pre-filtering. Opt. Lasers Eng. 2013, 51, 1161–1167.
  15. Gao, Y.; Cheng, T.; Su, Y.; Xu, X.; Zhang, Y.; Zhang, Q. High-efficiency and high-accuracy digital image correlation for three-dimensional measurement. Opt. Lasers Eng. 2015, 65, 73–80.
  16. Zou, X.; Pan, B. Full-automatic seed point selection and initialization for digital image correlation robust to large rotation and deformation. Opt. Lasers Eng. 2021, 138, 106432.
  17. Luo, P.; Chao, Y.; Sutton, M.; Peters, W.H. Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision. Exp. Mech. 1993, 33, 123–132.
  18. Salvi, J.; Armangué, X.; Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognit. 2002, 35, 1617–1635.
  19. Hirschmuller, H.; Scharstein, D. Evaluation of cost functions for stereo matching. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
  20. Hartley, R.I.; Sturm, P. Triangulation. Comput. Vis. Image Underst. 1997, 68, 146–157.
  21. Su, Z.; Lu, L.; Dong, S.; Yang, F.; He, X. Auto-calibration and real-time external parameter correction for stereo digital image correlation. Opt. Lasers Eng. 2019, 121, 46–53.
  22. Jiang, Z. OpenCorr: An open source library for research and development of digital image correlation. Opt. Lasers Eng. 2023, 165, 107566.
  23. Solav, D.; Silverstein, A. DuoDIC: 3D digital image correlation in MATLAB. J. Open Source Softw. 2022, 7, 4279.
  24. Wu, Y.; Chen, L.; Ge, G.; Tang, Y.; Wang, W. Enhance stereo-DIC in low quality speckle pattern by cross-scale stereo matching with application in dental crack detection. Opt. Lasers Eng. 2025, 186, 108770.
  25. Qin, W.; Wang, F.; Hu, S.; Shimasaki, K.; Ishii, I. Model-Based 3-D Vibration Measurement Using Stereo High-Frame-Rate Images. IEEE Trans. Instrum. Meas. 2025, 74, 5040812.
  26. Feng, Y.; Wang, L. Stereo-DICNet: An efficient and unified speckle matching network for stereo digital image correlation measurement. Opt. Lasers Eng. 2024, 179, 108267.
  27. Wang, L.; Lei, Z. Deep learning based speckle image super-resolution for digital image correlation measurement. Opt. Laser Technol. 2025, 181, 111746.
  28. Chen, Z.; Li, X.; Li, H. Multiple-view 3D digital image correlation based on pseudo-overlapped imaging. Opt. Lett. 2024, 49, 3733–3736.
  29. Huang, Z.; Shi, Y.; Gong, M. Visibility-Aware Pixelwise View Selection for Multi-View Stereo Matching. In Proceedings of the International Conference on Pattern Recognition (ICPR), Kolkata, India, 1–5 December 2024; Springer: Cham, Switzerland, 2025; pp. 130–144.
  30. Ge, P.; Wang, Y.; Zhou, J.; Wang, B. Point cloud optimization of multi-view images in digital image correlation system. Opt. Lasers Eng. 2024, 173, 107931.
  31. Li, J.; Shimasaki, K.; Wang, F.; Ishii, I. Point Cloud-Based 3D Tracking for Asynchronous and Uncalibrated Multi-Camera Systems. IEEE Sens. Lett. 2025, 9, 3503904.
  32. Xu, D.; Zheng, T.; Zhang, Y.; Yang, X.; Fu, W. Multi-person 3D pose estimation from multi-view without extrinsic camera parameters. Expert Syst. Appl. 2025, 266, 126114.
  33. Ariya, P.; Wongwan, N.; Worragin, P.; Intawong, K.; Puritat, K. Immersive realities in museums: Evaluating the impact of VR, VR360, and MR on visitor presence, engagement and motivation. Virtual Real. 2025, 29, 117.
  34. Saikia, A.; Di Vece, C.; Bonilla, S.; He, C.; Magbagbeola, M.; Mennillo, L.; Czempiel, T.; Bano, S.; Stoyanov, D. Robotic Arm Platform for Multi-View Image Acquisition and 3D Reconstruction in Minimally Invasive Surgery. IEEE Robot. Autom. Lett. 2025, 10, 3174–3181.
  35. Wu, T.; Hou, S.; Sun, W.; Shi, J.; Yang, F.; Zhang, J.; Wu, G.; He, X. Visual measurement method for three-dimensional shape of underwater bridge piers considering multirefraction correction. Autom. Constr. 2023, 146, 104706.
  36. Ospina-Bohorquez, A.; Del Pozo, S.; Courtenay, L.A.; González-Aguilera, D. Handheld stereo photogrammetry applied to crime scene analysis. Measurement 2023, 216, 112861.
  37. Wu, H.; Dong, Y.; Chen, X.; Xi, J. A novel method for high dynamic range optical measurement with single shot by multi-view stereo. Opt. Laser Technol. 2025, 189, 112930.
  38. Li, Q.; Hu, S.; Shimasaki, K.; Ishii, I. An Ultrafast Multi-object Zooming System Based on Low-latency Stereo Correspondence. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 11552–11557.
  39. Zhang, C.; Tong, J.; Lin, T.J.; Nguyen, C.; Li, H. PMVC: Promoting multi-view consistency for 3D scene reconstruction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; pp. 3678–3688.
  40. Li, Q.; Chen, M.; Gu, Q.; Ishii, I. A flexible calibration algorithm for high-speed bionic vision system based on galvanometer. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 4222–4227.
  41. Xie, R.; Chen, B.; Pan, B. Mirror-assisted multi-view high-speed digital image correlation for dual-surface dynamic deformation measurement. Sci. China Technol. Sci. 2023, 66, 807–820.
  42. Solav, D.; Moerman, K.M.; Jaeger, A.M.; Herr, H.M. A framework for measuring the time-varying shape and full-field deformation of residual limbs using 3-D digital image correlation. IEEE Trans. Biomed. Eng. 2019, 66, 2740–2752.
  43. Dai, Y.; Li, H. Multi-Camera digital image correlation in deformation measurement of civil components with large slenderness ratio and large curvature. Materials 2022, 15, 6281.
  44. Chang, X.; Le Gourriérec, C.; Turpin, L.; Berny, M.; Hild, F.; Roux, S. Proper Generalized Decomposition stereocorrelation to measure kinematic fields for high speed impact on laminated glass. Comput. Meth. Appl. Mech. Eng. 2023, 415, 116217.
  45. Gupta, V.; Jameel, A.; Verma, S.K.; Anand, S.; Anand, Y. An insight on NURBS based isogeometric analysis, its current status and involvement in mechanical applications. Arch. Comput. Method Eng. 2023, 30, 1187–1230.
  46. Fouque, R.; Bouclier, R.; Passieux, J.C.; Périé, J.N. Photometric DIC: A unified framework for global Stereo Digital Image Correlation based on the construction of textured digital twins. J. Theor. Comput. Appl. Mech. 2022.
  47. Serra, J.; Pierré, J.E.; Passieux, J.C.; Périé, J.N.; Bouvet, C.; Castanié, B. Validation and modeling of aeronautical composite structures subjected to combined loadings: The VERTEX project. Part 1: Experimental setup, FE-DIC instrumentation and procedures. Compos. Struct. 2017, 179, 224–244.
  48. Chapelier, M.; Bouclier, R.; Passieux, J.C. Free-Form Deformation Digital Image Correlation (FFD-DIC): A non-invasive spline regularization for arbitrary finite element measurements. Comput. Meth. Appl. Mech. Eng. 2021, 384, 113992.
  49. Genovese, K.; Badel, P.; Cavinato, C.; Pierrat, B.; Bersi, M.; Avril, S.; Humphrey, J. Multi-view digital image correlation systems for in vitro testing of arteries from mice to humans. Exp. Mech. 2021, 61, 1455–1472.
  50. Gui, B.; Bhardwaj, A.; Sam, L. Simulating urban expansion as a nonlinear constrained evolution process: A hybrid logistic–Monte Carlo cellular automata framework. Chaos Solitons Fractals 2025, 199, 116938.
Figure 1. Processing flow of the multi-camera (MC) digital image correlation (DIC) method with the pointwise-optimized model-based stereo pairing (MPMC-DIC) in this study: (a) Model-based MC-DIC (MMC-DIC), which involves steps (a1)–(a4). (b) Pointwise-optimized model-based stereo pairing strategy (PMSP), which involves steps (b1) and (b2).
Figure 2. Evaluation functions for (a) The subset validity rate. (b) The subset gradient. (c) The subset similarity. (d) The disparity.
Figure 3. Experimental setup of accuracy verification: (a) The experimental scene. (b) The cylinder to be observed. (c,d) A three-dimensional model with 176 measurement points (blue). The green point is located at the upper-front part of the cylinder for alignment across figures. The vermilion circle highlights the measurement point selected for PMSP demonstration (unit: cm).
Figure 4. Images captured by eight cameras at the initial position for accuracy verification. C1–C8 indicate the first to eighth cameras. Blue points and orange squares indicate the projections and 2D-DIC subsets, respectively, of one measurement point as circled in Figure 3c in each visible camera image.
Figure 5. PMSP results for 176 measurement points at 3.0 mm horizontal movement in accuracy verification: (a) The first paired camera. (b) The second paired camera. The vermilion circle highlights the same measurement point as in Figure 3c selected for PMSP demonstration.
Figure 6. Comparison of displacements measured using MPMC-DIC (blue) and the visibility-only method (red) at different horizontal movements in accuracy verification: (a) Displaced 3D position distributions of the 176 measurement points. (b) Mean errors μ_e and standard deviations σ_e.
Figure 7. Comparison of displacements measured using MPMC-DIC (blue) and the visibility-only method (red) at different depth movements in accuracy verification: (a) Displaced 3D position distributions of the 176 measurement points. (b) Mean errors μ_e and standard deviations σ_e.
Figure 8. Measurement object in the verification of robustness to digital speckle pattern: (a) The planar object (vermilion rectangle) to be observed. (b) Its 3D model with 175 evenly assigned (5 × 35) measurement points (blue). The cylinder is considered as background in this measurement (unit: cm).
Figure 9. Quantitative comparison of planar object displacements measured using MPMC-DIC (blue) and the visibility-only method (red) in the verification of robustness to digital speckle pattern: mean errors μ_e and standard deviations σ_e (a) at different horizontal cylinder movements; (b) at different depth cylinder movements.
Figure 10. Experimental setup of vibration measurement: (a) Experimental scene. (b) PC speaker to be observed. (c,d) Three-dimensional model with 32,055 measurement points across its surface (unit: cm).
Figure 11. Reference images captured by eight cameras in vibration measurement. C1–C8 indicate the first to eighth cameras. The orange square in the first image shows the 129 × 129-pixel subset size.
Figure 12. Visualization of the 12 most frequently selected camera pairs, shown in different colors, for the first processed frame of the vibration measurement. Black regions indicate measurement points that selected other camera pairs or that were not measurable due to lack of visibility. In total, 11 of the 32,055 measurement points, which are used for the vibration analysis, are marked on the 3D model.
Figure 13. Time-varying 3D shapes of the speaker’s front surface over 20 ms (one period of the 50 Hz audio), placing emphasis on the membrane vibration measured by different methods: (a) MPMC-DIC; (b) the visibility-only method. Deformations are geometrically magnified 10 times and enhanced by a colormap based on the actual displacement magnitude (Δp(t)).
Figure 14. Time-varying 3D shapes of the speaker over 20 ms (one period of the 50 Hz audio), placing emphasis on the housing vibration measured by MPMC-DIC. Deformations are geometrically magnified 10 times and enhanced by a colormap based on the actual displacement magnitude (Δp(t)).
Figure 15. Three-dimensional vibrations measured at three measurement points on the membrane of the speaker (p1–p3, as illustrated in Figure 12) using MPMC-DIC: (a) Three-dimensional displacements over 0.2 s. (b) Frequency–amplitude spectra derived from 0.5 s of z-displacements, where a prominent peak at 50 Hz, as well as harmonic peaks at 100 and 150 Hz, are identified. p1 and p3 are located at the center and edge of the membrane, respectively, and p2 lies midway between them.
Figure 16. Three-dimensional vibrations measured at eight measurement points on the back, top, and lateral surfaces of the speaker (p4–p11, as illustrated in Figure 12) using MPMC-DIC: (a) Three-dimensional displacements over 0.2 s. (b) Frequency–amplitude spectra derived from 0.5 s of z-displacements, where a prominent peak at 50 Hz, as well as harmonic peaks at 100 and 150 Hz, are identified.
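For readers who want to reproduce the spectral analysis behind Figures 15 and 16: with 0.5 s of z-displacement sampled at 400 fps (200 samples, i.e., 2 Hz frequency resolution), a single-sided amplitude spectrum follows from a real FFT. The Python sketch below is ours, not the authors' code; the synthetic 50 Hz signal with weak 100/150 Hz harmonics merely stands in for the measured displacements.

    import numpy as np

    fps, duration = 400.0, 0.5                 # capture rate and analysis window from the paper
    t = np.arange(int(fps * duration)) / fps   # 200 time samples over 0.5 s
    # Synthetic stand-in for a measured z-displacement [mm]: a 50 Hz fundamental
    # plus weak 100/150 Hz harmonics, mimicking the peaks in Figures 15 and 16.
    z = (0.05 * np.sin(2 * np.pi * 50 * t)
         + 0.01 * np.sin(2 * np.pi * 100 * t)
         + 0.005 * np.sin(2 * np.pi * 150 * t))
    amp = np.abs(np.fft.rfft(z)) * 2 / len(z)  # single-sided amplitude spectrum
    freq = np.fft.rfftfreq(len(z), d=1 / fps)  # frequency axis, 0-200 Hz (Nyquist)
    print(freq[np.argmax(amp)])                # -> 50.0, the dominant peak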
Table 1. Optical system configurations.
Camera: Eosens 2.0CXP2 HFR camera (Mikrotron, Unterschleissheim, Germany)
Resolution: 1920 × 1080 pixels
Pixel size: 10 μm
Lens: CREATOR F2 lens (Zhong Yi Optics, Shenyang, China)
Focal length: 35 mm
Aperture: f/16
Stand-off distance (SOD) 1: 273.03 mm
Stereo angle 2: 43°
Field-of-view (FOV) 3: 149.78 × 84.25 mm
Image scale 3: 12.82 pixel/mm
1 The SOD was calculated as the mean distance of the eight cameras from the cylinder. 2 The stereo angle was calculated as the mean angle between adjacent cameras. 3 The FOV and image scale were theoretically derived based on the SOD. The actual FOV is presented in Figure 4, and the actual image scale may vary slightly depending on the measurement position.
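As a cross-check on the footnoted derivations, the theoretical FOV and image scale follow from the pinhole relations FOV = sensor extent × SOD / focal length and image scale = focal length / (pixel pitch × SOD). A minimal Python sketch (ours, for orientation only) reproduces the tabulated values:

    # Pinhole-model check of Table 1's theoretically derived quantities.
    sod_mm = 273.03                        # mean stand-off distance (SOD)
    focal_mm = 35.0                        # lens focal length
    pixel_mm = 0.010                       # 10 um pixel pitch
    width_px, height_px = 1920, 1080       # sensor resolution
    sensor_w = width_px * pixel_mm         # 19.2 mm sensor width
    sensor_h = height_px * pixel_mm        # 10.8 mm sensor height
    fov_w = sensor_w * sod_mm / focal_mm   # -> 149.78 mm
    fov_h = sensor_h * sod_mm / focal_mm   # -> 84.25 mm
    scale = focal_mm / (pixel_mm * sod_mm) # -> 12.82 pixel/mm
    print(f"FOV {fov_w:.2f} x {fov_h:.2f} mm, image scale {scale:.2f} pixel/mm")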
Table 2. Two-dimensional-DIC configurations.
Subset size: 161 × 161 pixels
Initial guess: fast Fourier transform-accelerated cross-correlation (FFTCC)
Non-linear optimization: inverse compositional Gauss–Newton algorithm (ICGN)
Subset shape function: first-order shape function
Convergence criterion: 0.001 pixels
Stop condition: 10 iteration steps
Correlation criterion: zero-mean normalized cross-correlation (ZNCC)
Interpolation: bi-cubic B-spline interpolation
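For context on the correlation criterion in Table 2: ZNCC compares mean-shifted, contrast-normalized subsets, which makes the match score insensitive to uniform brightness and contrast changes. A minimal NumPy sketch of the criterion itself (illustrative only; the FFTCC initial guess, ICGN refinement, and B-spline interpolation listed above are omitted):

    import numpy as np

    def zncc(f: np.ndarray, g: np.ndarray) -> float:
        """Zero-mean normalized cross-correlation of two equally sized subsets;
        1.0 indicates a perfect match, values near 0 indicate no correlation."""
        f0 = f - f.mean()
        g0 = g - g.mean()
        denom = np.sqrt((f0 ** 2).sum() * (g0 ** 2).sum())
        return float((f0 * g0).sum() / denom) if denom > 0 else 0.0

    # Example: a 161 x 161 reference subset (Table 2) against a noisy copy.
    rng = np.random.default_rng(0)
    ref = rng.random((161, 161))
    cur = ref + 0.01 * rng.standard_normal(ref.shape)
    print(zncc(ref, cur))  # close to 1.0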
Table 3. PMSP evaluation parameters in accuracy verification.
α_v    α_g    β_g    α_s    β_d
0.5    12.5   5.0    0.5    4.0
Table 4. PMSP flow (individual camera evaluation) in accuracy verification.
                a   V      G      S      f_v(V)   f_g(G)   f_s(S)   h
C1 (selected)   1   1.000  0.025  0.996  1.000    0.991    0.993    0.992
C2              1   0.592  0.430  0.899  0.184    0.406    0.799    0.437
C5 (selected)   1   1.000  0.005  0.994  1.000    0.993    0.987    0.990
C6              1   0.604  0.190  0.647  0.207    0.932    0.294    0.244
others          0   *      *      *      *        *        *        *
“(selected)” marks the results associated with the camera pair ultimately chosen by PMSP. Evaluations for invisible cameras are excluded and replaced with the label “*”.
Table 5. PMSP flow (camera pair evaluation and selection) in accuracy verification.
Pairwise disparity D (lower-left triangle) and disparity evaluation f_d(D) (upper-right triangle):
          C1      C2      C5      C6      others
C1        /       0.914   0.904   0.982   *
C2        0.775   /       0.976   0.908   *
C5        0.747   1.107   /       0.936   *
C6        1.173   0.758   0.853   /       *
others    *       *       *       *       /
Camera group scores H: C1–C2: 0.396; C1–C5: 0.888 (selected); C1–C6: 0.237; C2–C5: 0.422; C2–C6: 0.097; C5–C6: 0.226; others: *.
“(selected)” marks the results associated with the camera pair chosen by PMSP. Evaluations for invisible cameras are excluded and replaced with the label “*”.
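The numbers in Tables 4 and 5 are consistent with the overall pair score being the product of the two cameras' individual scores and the disparity factor, H = h_i × h_j × f_d(D_ij); for C1–C5, 0.992 × 0.990 × 0.904 ≈ 0.888, the highest H, matching the selected pair. The Python sketch below encodes that combination rule; it is our reading of the tabulated values, not code from the paper.

    from itertools import combinations

    # Individual camera scores h (Table 4) and disparity factors f_d(D)
    # (Table 5, upper-right triangle). The product rule below is inferred
    # from the tabulated values, not quoted from the paper.
    h = {"C1": 0.992, "C2": 0.437, "C5": 0.990, "C6": 0.244}
    f_d = {("C1", "C2"): 0.914, ("C1", "C5"): 0.904, ("C1", "C6"): 0.982,
           ("C2", "C5"): 0.976, ("C2", "C6"): 0.908, ("C5", "C6"): 0.936}

    def pair_score(ci: str, cj: str) -> float:
        return h[ci] * h[cj] * f_d[(ci, cj)]

    best = max(combinations(sorted(h), 2), key=lambda p: pair_score(*p))
    print(best, round(pair_score(*best), 3))  # -> ('C1', 'C5') 0.888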
Table 6. Computational time of MPMC-DIC and the visibility-only method (unit: s).
Method                      Calibration   PVD 1     2D-DIC   3D-DE 2   DID 3       ICE 4   CPES 5   Total
Visibility-only (CPU 6)     135.89        3671.80   9.01     0.00      /           /       /        3816.70
Visibility-only (GPU 7)     -             167.34    -        -         /           /       /        312.24
MPMC-DIC (CPU 6)            135.89        3677.05   8.88     0.00      27,077.75   6.87    0.11     30,906.56
MPMC-DIC (GPU 7)            -             167.81    -        -         313.47      -       -        633.03
1 PVD refers to projection and visibility determination. 2 3D-DE refers to 3D displacement estimation. 3 DID refers to depth image determination. 4 ICE refers to individual camera evaluation. 5 CPES refers to camera pair evaluation and selection. 6 CPU: execution times recorded with a sequential CPU implementation without multithreading. 7 GPU: execution times of PVD and DID recorded with a parallel GPU implementation using 32,768 threads; entries labeled “-” are identical to the corresponding CPU times.
Table 7. PMSP evaluation parameters in vibration measurement.
α_v    α_g    β_g    α_s    β_d
0.5    0.5    5.0    0.6    12.0