Stereo Digital Image Correlation in MATLAB
Abstract
1. Introduction
2. Framework Theory
2.1. Homogeneous Coordinates
2.2. Calibration
2.2.1. Pinhole Camera Model
2.2.2. Radial Distortion Model
2.2.3. Determining Calibration Parameters
2.3. Correlation
2.3.1. Shape Function
2.3.2. Interpolation
2.3.3. Correlation Criterion
2.3.4. Objective Function
2.3.5. Optimization Equation
2.3.6. Updating the SFPs
$${A}_{1}=2{u}_{x}+{u}_{x}^{2}+u{u}_{xx},$$

$${A}_{2}=2u{u}_{xy}+2\left(1+{u}_{x}\right){u}_{y},$$

$${A}_{3}={u}_{y}^{2}+u{u}_{yy},$$

$${A}_{4}=2u\left(1+{u}_{x}\right),$$

$${A}_{5}=2u{u}_{y},$$

$${A}_{6}={u}^{2},$$

$${A}_{7}=\frac{1}{2}\left(v{u}_{xx}+2\left(1+{u}_{x}\right){v}_{x}+u{v}_{xx}\right),$$

$${A}_{8}={u}_{y}{v}_{x}+{u}_{x}{v}_{y}+v{u}_{xy}+u{v}_{xy}+{v}_{y}+{u}_{x},$$

$${A}_{9}=\frac{1}{2}\left(v{u}_{yy}+2\left(1+{v}_{y}\right){u}_{y}+u{v}_{yy}\right),$$

$${A}_{10}=v+v{u}_{x}+u{v}_{x},$$

$${A}_{11}=u+v{u}_{y}+u{v}_{y},$$

$${A}_{12}=uv,$$

$${A}_{13}={v}_{x}^{2}+v{v}_{xx},$$

$${A}_{14}=2v{v}_{xy}+2{v}_{x}\left(1+{v}_{y}\right),$$

$${A}_{15}=2{v}_{y}+{v}_{y}^{2}+v{v}_{yy},$$

$${A}_{16}=2v{v}_{x},$$

$${A}_{17}=2v\left(1+{v}_{y}\right)$$

$$\mathrm{and}\ {A}_{18}={v}^{2}.$$

2.3.7. Stopping Criterion
2.4. Epipolar Geometry
2.5. StereoDIC Overview
2.6. Subset Matching
2.6.1. Temporal Matching
2.6.2. Stereo Matching
2.7. Polynomial Triangulation Method
2.8. Linear Triangulation Method
2.9. Displacement Transformation
3. Implementation
3.1. ADIC3D Function
3.2. Correlation Implementation
3.3. Stereo Matching Implementation
3.3.1. StereoMatch Function
3.3.2. FeatureMatch Function
3.4. Temporal Matching Implementation
3.4.1. PCM Function
3.4.2. ImgCorr Function
3.5. Displacement Transformation Implementation
3.5.1. CSTrans Function
3.5.2. Triangulation Function
4. Validation
4.1. Samples 1 and 2
4.2. Sample 5
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Glossary of Symbols
${a}_{1}$ through ${a}_{6}$  Affine transformation parameters 
${\alpha}_{j}$  Homogeneous scaling variable of the $j$^{th} camera 
$\mathit{B}$  Fundamental matrix 
${\mathit{B}}_{1}$  Fundamental matrix after translation of polynomial triangulation method 
${\mathit{B}}_{2}$  Fundamental matrix after rotation of polynomial triangulation method 
$\beta $  Gaussian filtering window size 
${c}_{{x}_{j}}$ and ${c}_{{y}_{j}}$  Translation applied to sensor coordinate system of the $j$^{th} camera 
${c}_{{s}_{j}}$  Skew of ideal sensor coordinate system of the $j$^{th} camera 
${C}_{ZNSSD}$  Zero-mean normalized sum of squared differences correlation criterion 
${C}_{ZNCC}$  Zero-mean normalized cross-correlation criterion 
${C}_{ObjFun}$  Objective function of correspondence problem 
$D$  Geometric error cost function 
$\delta $  Function to determine the Euclidean distance between two points 
${\underset{\_}{\mathit{e}}}_{{s}_{j}}={\left[\begin{array}{ccc}{e}_{{s}_{j}}^{1}& {e}_{{s}_{j}}^{2}& {e}_{{s}_{j}}^{3}\end{array}\right]}^{T}$  Epipole of the $j$^{th} camera 
${E}_{proj}$  Total projection error 
${E}_{k}^{dist}$  Squared error distance of $k$^{th} keypoint 
$F$  Reference image 
$\mathit{f}$  Reference subset 
${f}_{i}$  Light intensity value of pixel $i$ of reference subset 
$\overline{f}$  Mean light intensity value of reference subset 
$\tilde{f}$  Normalization function of reference subset 
$\nabla {\mathit{f}}_{i}=\left[\begin{array}{cc}\frac{\partial {f}_{i}}{\partial x}& \frac{\partial {f}_{i}}{\partial y}\end{array}\right]$  Light intensity gradient of reference subset for pixel $i$ 
$G$  Deformed image 
$\mathit{g}$  Investigated subset 
${g}_{i}$  Light intensity value of pixel $i$ of investigated subset 
$\overline{g}$  Mean light intensity value of investigated subset 
$\tilde{g}$  Normalization function of investigated subset 
$\mathit{H}$  Hessian of optimization equation 
$I$  Number of pixels within a subset 
$\mathit{I}$  $3\times 3$ identity matrix 
$i$  Subscript indicating that a variable is associated with the $i$^{th} pixel of a subset 
$\mathit{J}$  Jacobian of optimization equation 
$j$  Subscript indicating that a variable is associated with the $j$^{th} camera 
${\mathit{K}}_{j}$  Intrinsic parameter matrix of the $j$^{th} camera 
$K$  Number of keypoints under consideration 
${\kappa}_{j}^{1}$ and ${\kappa}_{j}^{2}$  Radial distortion parameters of the $j$^{th} camera 
$L$  Number of calibration targets within a calibration image series 
${\underset{\_}{\mathit{\lambda}}}_{j}$  Epipolar line of the $j$^{th} camera 
$\mathit{M}$  $3\times 3$ matrix 
$\mathit{m}$  $3\times 1$ vector 
${\left[\mathit{m}\right]}_{\times}$  Skew symmetric matrix of $\mathit{m}$ 
$MAE$  Mean of the absolute error 
${\widehat{\mu}}_{{w}_{q}}^{true}$  True displacement magnitude 
${\widehat{\mu}}_{{w}_{q}}^{calc}$  Calculated displacement magnitude 
$\mathsf{\Omega}$  Homography applied to projection matrices 
$\mathit{\omega}$  Function to populate a square matrix with the shape function parameters 
$\mathit{P}$  Shape function parameters 
$\mathsf{\Delta}\mathit{P}$  Iterative improvement of shape function parameters 
${\mathit{P}}_{update}$  Updated shape function parameters 
$\Vert \mathsf{\Delta}\mathit{P}\Vert $  Stopping criterion 
$\mathsf{\Phi}$  Function representing conversion of coordinate from projective space to Euclidean space 
$\mathsf{\Psi}$  Function representing the triangulation methods 
${\phi}_{1}$–${\phi}_{4}$  Elements of the fundamental matrix for the polynomial triangulation method 
${\mathit{Q}}_{j}$  Projection matrix of the $j$^{th} camera 
${\mathit{Q}}_{{c}_{j}}$  Projection matrix of the $j$^{th} camera in canonical form 
${\mathit{q}}_{{c}_{j}}^{n}$  The $n$^{th} row of the canonical projection matrix of the $j$^{th} camera 
$Q$  Number of subset pairs per image pair 
${\mathit{R}}_{j}$  Rotation matrix for pinhole camera model of the $j$^{th} camera 
${\mathit{R}}_{j}^{pm}$  Rotation matrix of the $j$^{th} camera for the polynomial triangulation method 
$\mathit{S}$  Matrix for the linear triangulation method 
${\sigma}^{g}$  Standard deviation of Gaussian function 
$\sigma $  Standard deviation of the absolute error 
${\mathit{T}}_{j}$  Translation vector for pinhole camera model of the $j$^{th} camera 
${\mathit{T}}_{j}^{pm}$  Translation matrix of the $j$^{th} camera for polynomial triangulation method 
$t$  Variable for parameterization of epipolar lines 
$\tau $  Squared error distance threshold 
$u$  Displacement in x-direction in the distorted sensor coordinate system 
${u}_{x}$, ${u}_{xx}$, ${u}_{y}$, ${u}_{yy}$ and ${u}_{xy}$  Derivatives of the x-direction displacement 
${\widehat{u}}_{w}$  Displacement in x-direction in the world coordinate system 
${\mathit{V}}_{j}$  Extrinsic parameter matrix of the $j$^{th} camera 
$v$  Displacement in y-direction in the distorted sensor coordinate system 
${v}_{x}$, ${v}_{xx}$, ${v}_{y}$, ${v}_{yy}$ and ${v}_{xy}$  Derivatives of the y-direction displacement 
${\widehat{v}}_{w}$  Displacement in y-direction in the world coordinate system 
$\mathit{W}$  Shape function 
$\frac{\partial {\mathit{W}}_{i}}{\partial \mathit{P}}$  Jacobian of the shape function, in terms of the shape function parameters, for pixel $i$ 
${\widehat{w}}_{w}$  Displacement in z-direction in the world coordinate system 
${\widehat{\mathit{x}}}_{w}={\left[\begin{array}{ccc}{\widehat{x}}_{w}& {\widehat{y}}_{w}& {\widehat{z}}_{w}\end{array}\right]}^{T}$  Ideal coordinate in the world coordinate system 
${\widehat{\mathit{x}}}_{{s}_{j}}={\left[\begin{array}{cc}{\widehat{x}}_{{s}_{j}}& {\widehat{y}}_{{s}_{j}}\end{array}\right]}^{T}$  Ideal coordinate in the ideal sensor coordinate system of the $j$^{th} camera 
${\widehat{\mathit{x}}}_{{n}_{j}}={\left[\begin{array}{cc}{\widehat{x}}_{{n}_{j}}& {\widehat{y}}_{{n}_{j}}\end{array}\right]}^{T}$  Normalized ideal image coordinates of the $j$^{th} camera 
${\mathit{x}}_{{n}_{j}}={\left[\begin{array}{cc}{x}_{{n}_{j}}& {y}_{{n}_{j}}\end{array}\right]}^{T}$  Normalized distorted image coordinates of the $j$^{th} camera 
${\mathit{x}}_{{s}_{j}}={\left[\begin{array}{cc}{x}_{{s}_{j}}& {y}_{{s}_{j}}\end{array}\right]}^{T}$  Coordinate in the distorted sensor coordinate system of the $j$^{th} camera 
${\mathit{x}}_{l}^{true}={\left[\begin{array}{cc}{x}_{l}^{true}& {y}_{l}^{true}\end{array}\right]}^{T}$  True location of $l$^{th} calibration target in distorted sensor coordinate system 
${\mathit{x}}_{l}^{calc}={\left[\begin{array}{cc}{x}_{l}^{calc}& {y}_{l}^{calc}\end{array}\right]}^{T}$  Location of $l$^{th} calibration target in distorted sensor coordinate system predicted by the camera model 
${\mathit{x}}_{i}={\left[\begin{array}{cc}{x}_{i}& {y}_{i}\end{array}\right]}^{T}$  Pixel position of the $i$^{th} pixel of the reference subset in distorted sensor coordinate system 
${\mathit{x}}^{o}={\left[\begin{array}{cc}{x}^{o}& {y}^{o}\end{array}\right]}^{T}$  Center of reference subset in distorted sensor coordinate system 
$\Delta {\mathit{x}}_{i}={\left[\begin{array}{cc}\mathsf{\Delta}{x}_{i}& \mathsf{\Delta}{y}_{i}\end{array}\right]}^{T}$  Distance from the reference subset center to $i$^{th} pixel position of reference subset 
$\Delta {\mathit{x}}_{i}^{\prime}={\left[\begin{array}{cc}\mathsf{\Delta}{x}_{i}^{\prime}& \mathsf{\Delta}{y}_{i}^{\prime}\end{array}\right]}^{T}$  Distance from the reference subset center to $i$^{th} pixel position of investigated subset 
${\mathit{x}}_{i}^{\prime}={\left[\begin{array}{cc}{x}_{i}^{\prime}& {y}_{i}^{\prime}\end{array}\right]}^{T}$  $i$^{th} pixel position of the investigated subset in the distorted sensor coordinate system 
${\mathit{x}}_{{s}_{j}}^{o}={\left[\begin{array}{cc}{x}_{{s}_{j}}^{o}& {y}_{{s}_{j}}^{o}\end{array}\right]}^{T}$  Reference subset position in distorted sensor coordinate system of the $j$^{th} camera 
${\mathit{x}}_{{s}_{j}}^{d}={\left[\begin{array}{cc}{x}_{{s}_{j}}^{d}& {y}_{{s}_{j}}^{d}\end{array}\right]}^{T}$  Investigated subset position in the distorted sensor coordinate system of the $j$^{th} camera 
${\stackrel{\u02c7}{\mathit{x}}}_{{s}_{j}}^{o}={\left[\begin{array}{cc}{\stackrel{\u02c7}{x}}_{{s}_{j}}^{o}& {\stackrel{\u02c7}{y}}_{{s}_{j}}^{o}\end{array}\right]}^{T}$  Measured position of the reference subset in the ideal sensor coordinate system of the $j$^{th} camera 
${\stackrel{\u02c7}{\mathit{x}}}_{{s}_{j}}^{d}={\left[\begin{array}{cc}{\stackrel{\u02c7}{x}}_{{s}_{j}}^{d}& {\stackrel{\u02c7}{y}}_{{s}_{j}}^{d}\end{array}\right]}^{T}$  Measured position of the investigated subset in the ideal sensor coordinate system of the $j$^{th} camera 
${\stackrel{\u02c7}{\mathit{x}}}_{{s}_{j}}={\left[\begin{array}{cc}{\stackrel{\u02c7}{x}}_{{s}_{j}}& {\stackrel{\u02c7}{y}}_{{s}_{j}}\end{array}\right]}^{T}$  Measured coordinate in the ideal sensor coordinate system of the $j$^{th} camera 
${\widehat{\mathit{x}}}_{w}^{o}={\left[\begin{array}{ccc}{\widehat{x}}_{w}^{o}& {\widehat{y}}_{w}^{o}& {\widehat{z}}_{w}^{o}\end{array}\right]}^{T}$  Position of reference subset pair in the world coordinate system 
${\widehat{\mathit{x}}}_{w}^{d}={\left[\begin{array}{ccc}{\widehat{x}}_{w}^{d}& {\widehat{y}}_{w}^{d}& {\widehat{z}}_{w}^{d}\end{array}\right]}^{T}$  Position of investigated subset pair in the world coordinate system 
${\mathit{x}}_{{s}_{j}}^{k}={\left[\begin{array}{cc}{x}_{{s}_{j}}^{k}& {y}_{{s}_{j}}^{k}\end{array}\right]}^{T}$  $k$^{th} keypoint location in the first image of the image series of the $j$^{th} camera 
${\xi}_{{x}_{j}}$ and ${\xi}_{{y}_{j}}$  Scaling of metric units to pixels for the $j$^{th} camera 
$\zeta $  Maximum distance, along a single axis, from the center of the reference subset to a pixel in the reference subset 
Appendix A. ADIC3D Framework Code
 1. function ResultData=ADIC3D(FileNames1,FileNames2,Mask,GaussFilt,StepSize,SubSize,SubShape,SFOrder,RefStrat,StopCritVal,WorldCTs,ImgCTs)
 2. [~,ImNames1]=cellfun(@fileparts,FileNames1,'Uni',0);
 3. [~,ImNames2]=cellfun(@fileparts,FileNames2,'Uni',0);
 4. n=numel(FileNames1);
 5. [r,c]=size(im2double(imread(FileNames1{1})));
 6. [XosX,XosY]=meshgrid(((SubSize+1)/2+StepSize):StepSize:(c-(SubSize+1)/2-1-StepSize),((SubSize+1)/2+StepSize):StepSize:(r-(SubSize+1)/2-1-StepSize));
 7. Xos=[XosX(:)'; XosY(:)']; clear XosX XosY;
 8. Xos=Xos(:,arrayfun(@(X,Y) min(min(Mask(Y-(SubSize-1)/2:Y+(SubSize-1)/2,X-(SubSize-1)/2:X+(SubSize-1)/2))),Xos(1,:),Xos(2,:))==1);
 9. ResultData.ProcData1=struct('ImgName',ImNames1,'ImgSize',repmat({[r,c]},1,n),'ImgFilt',repmat({GaussFilt},1,n),'SubSize',repmat({SubSize*ones([1,size(Xos,2)])},1,n),'SubShape',repmat({repmat(SubShape,size(Xos,2),1)},1,n),'SFOrder',repmat({repmat(SFOrder,1,size(Xos,2))},1,n),'Xos',repmat({Xos},1,n),'P',repmat({zeros([12,size(Xos,2)])},1,n),'C',repmat({NaN([1,size(Xos,2)])},1,n),'StopVal',repmat({ones([1,size(Xos,2)])*StopCritVal},1,n),'Iter',repmat({zeros([1,size(Xos,2)])},1,n));
 10. ResultData.ProcData2=struct('ImgName',ImNames2,'ImgSize',repmat({[r,c]},1,n),'ImgFilt',repmat({GaussFilt},1,n),'SubSize',repmat({SubSize*ones([1,size(Xos,2)])},1,n),'SubShape',repmat({repmat(SubShape,size(Xos,2),1)},1,n),'SFOrder',repmat({repmat(SFOrder,1,size(Xos,2))},1,n),'Xos',repmat({Xos},1,n),'P',repmat({zeros([12,size(Xos,2)])},1,n),'C',repmat({NaN([1,size(Xos,2)])},1,n),'StopVal',repmat({ones([1,size(Xos,2)])*StopCritVal},1,n),'Iter',repmat({zeros([1,size(Xos,2)])},1,n));
 11. ResultData.Stereo=struct('P',zeros([12,size(Xos,2)]),'C',NaN([1,size(Xos,2)]),'StopVal',ones([1,size(Xos,2)])*StopCritVal,'Iter',zeros([1,size(Xos,2)]));
 12. ResultData.DispTrans=struct('Xow',repmat({NaN(3,size(Xos,2))},1,n),'Uw',repmat({NaN(3,size(Xos,2))},1,n),'CamParams',repmat({stereoParameters(cameraParameters,cameraParameters,zeros(3,3),zeros(1,3))},1,n));
 13. ResultData=StereoMatch(n,ResultData,FileNames1,FileNames2,StopCritVal); % Section 2.6.2
 14. fprintf('\nFirst image set...\n');
 15. ResultData.ProcData1=ImgCorr(n,ResultData.ProcData1,FileNames1,RefStrat,StopCritVal); % Section 2.6.1
 16. fprintf('\nSecond image set...\n');
 17. ResultData.ProcData2=ImgCorr(n,ResultData.ProcData2,FileNames2,RefStrat,StopCritVal); % Section 2.6.1
 18. ResultData=CSTrans(n,ResultData,WorldCTs,ImgCTs,RefStrat); % Section 2.9
 1. function [f,dfdx,dfdy,dX,dY]=SubShapeExtract(SubSize,SubShape,Xos,F,dFdx,dFdy,SubExtract)
 2. switch SubShape
 3. case 'Square'
 4. f(:)=reshape(SubExtract(F,Xos,SubSize),[SubSize*SubSize,1]);
 5. dfdx(:)=reshape(SubExtract(dFdx,Xos,SubSize),[SubSize*SubSize,1]);
 6. dfdy(:)=reshape(SubExtract(dFdy,Xos,SubSize),[SubSize*SubSize,1]);
 7. [dX,dY]=meshgrid(-(SubSize-1)/2:(SubSize-1)/2,-(SubSize-1)/2:(SubSize-1)/2); dX=dX(:); dY=dY(:);
 8. case 'Circle'
 9. f=SubExtract(F,Xos,SubSize);
 10. dfdx=SubExtract(dFdx,Xos,SubSize);
 11. dfdy=SubExtract(dFdy,Xos,SubSize);
 12. [dX,dY]=meshgrid(-(SubSize-1)/2:(SubSize-1)/2,-(SubSize-1)/2:(SubSize-1)/2);
 13. mask_keep=sqrt(abs(dX).^2+abs(dY).^2)<=(SubSize/2-0.5);
 14. f=f(mask_keep);
 15. dfdx=dfdx(mask_keep); dfdy=dfdy(mask_keep);
 16. dX=dX(mask_keep); dY=dY(mask_keep);
 17. end
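The 'Circle' case keeps only the pixels inside the circle inscribed in the square subset. A minimal NumPy sketch of that mask construction (illustrative only; `sub_size` and `half` are assumed names, not from the listing):

```python
import numpy as np

# Build the circular subset mask used by SubShapeExtract's 'Circle' case:
# keep pixels whose distance from the subset centre is at most SubSize/2 - 0.5.
sub_size = 21                          # example subset size (odd)
half = (sub_size - 1) // 2
dX, dY = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
mask = np.sqrt(dX**2 + dY**2) <= (sub_size / 2 - 0.5)
```

The centre pixel always survives the mask while the corners are discarded, so the circular subset contains fewer pixels than the square one of the same size.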
 1. function [W,dFdWdP,SFPVec2Mat,Mat2SFPVec,StopCrit]=SFExpressions(SFOrder)
 2. switch SFOrder
 3. case 0 % Zero order SF
 4. W=@(dX,dY,P) [P(1)+dX,P(7)+dY]; % Equation (9)
 5. dFdWdP=@(dX,dY,dfdx,dfdy) [dfdx,dfdy];
 6. SFPVec2Mat=@(P) [1,0,P(1); 0,1,P(7); 0,0,1]; % line restored from context: inverse of the Mat2SFPVec mapping below
 7. Mat2SFPVec=@(W) [W(7),0,0,0,0,0,W(8),0,0,0,0,0];
 8. StopCrit=@(dP,Zeta) sqrt(sum((dP'.*[1,0,0,0,0,0,1,0,0,0,0,0]).^2)); % Equation (23)
 9. case 1 % First order SF
 10. W=@(dX,dY,P) [P(1)+P(3).*dY+dX.*(P(2)+1),P(7)+P(8).*dX+dY.*(P(9)+1)]; % Equation (9)
 11. dFdWdP=@(dX,dY,dfdx,dfdy) [dfdx,dfdx.*dX,dfdx.*dY,dfdy,dfdy.*dX,dfdy.*dY];
 12. SFPVec2Mat=@(P) [P(2)+1,P(3),P(1); P(8),P(9)+1,P(7); 0,0,1]; % line restored from context: inverse of the Mat2SFPVec mapping below
 13. Mat2SFPVec=@(W) [W(7),W(1)-1.0,W(4),0,0,0,W(8),W(2),W(5)-1.0,0,0,0];
 14. StopCrit=@(dP,Zeta) sqrt(sum((dP'.*[1,Zeta,Zeta,0,0,0,1,Zeta,Zeta,0,0,0]).^2)); % Equation (23)
 15. case 2 % Second order SF
 16. W=@(dX,dY,P) [P(1)+P(3).*dY+P(4).*dX.^2.*(1/2)+P(6).*dY.^2.*(1/2)+dX.*(P(2)+1)+P(5).*dX.*dY,P(7)+P(8).*dX+P(10).*dX.^2.*(1/2)+P(12).*dY.^2.*(1/2)+dY.*(P(9)+1)+P(11).*dX.*dY]; % Equation (9)
 17. dFdWdP=@(dX,dY,dfdx,dfdy) [dfdx,dfdx.*dX,dfdx.*dY,(dfdx.*dX.^2)/2,dfdx.*dX.*dY,(dfdx.*dY.^2)/2,dfdy,dfdy.*dX,dfdy.*dY,(dfdy.*dX.^2)/2,dfdy.*dX.*dY,(dfdy.*dY.^2)/2];
 18. SFPVec2Mat=@(P) reshape([P(2)*2+P(1)*P(4)+P(2)^2+1,P(1)*P(10)*1/2+P(4)*P(7)*(1/2)+P(8)*(P(2)*2+2)*1/2,P(7)*P(10)+P(8)^2,P(4)*1/2,P(10)*1/2,0,P(1)*P(5)*2+P(3)*(P(2)*2+2),P(2)+P(9)+P(2)*P(9)+P(3)*P(8)+P(1)*P(11)+P(5)*P(7)+1,P(7)*P(11)*2.0+P(8)*(P(9)+1)*2,P(5),P(11),0,P(1)*P(6)+P(3)^2,P(1)*P(12)*1/2+P(6)*P(7)*1/2+P(3)*(P(9)+1),P(9)*2+P(7)*P(12)+P(9)^2+1,P(6)*1/2,P(12)*1/2,0,P(1)*(P(2)+1)*2,P(7)+P(1)*P(8)+P(2)*P(7),P(7)*P(8)*2,P(2)+1,P(8),0,P(1)*P(3)*2,P(1)+P(1)*P(9)+P(3)*P(7),P(7)*(P(9)+1)*2,P(3),P(9)+1,0,P(1)^2,P(1)*P(7),P(7)^2,P(1),P(7),1],[6,6]); % Equation (22)
 19. Mat2SFPVec=@(W) [W(34),W(22)-1,W(28),W(4).*2,W(10),W(16).*2,W(35),W(23),W(29)-1,W(5).*2,W(11),W(17).*2];
 20. StopCrit=@(dP,Zeta) sqrt(sum((dP'.*[1,Zeta,Zeta,0.5*Zeta.^2,Zeta.^2,0.5*Zeta.^2,1,Zeta,Zeta,0.5*Zeta.^2,Zeta.^2,0.5*Zeta.^2]).^2)); % Equation (23)
 21. end
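As an illustrative consistency check (not part of the framework), the second-order `SFPVec2Mat` and `Mat2SFPVec` handles should be mutual inverses. The NumPy transcription below assumes the SFP ordering P = [u, u_x, u_y, u_xx, u_xy, u_yy, v, v_x, v_y, v_xx, v_xy, v_yy]; `order='F'` mimics MATLAB's column-major reshape and linear indexing:

```python
import numpy as np

def sfp_vec_to_mat(P):
    """6x6 second-order warp matrix, transcribed from the MATLAB SFPVec2Mat."""
    u, ux, uy, uxx, uxy, uyy, v, vx, vy, vxx, vxy, vyy = P
    flat = [  # 36 entries, filled column by column as MATLAB's reshape does
        2*ux + u*uxx + ux**2 + 1,
        0.5*u*vxx + 0.5*uxx*v + 0.5*vx*(2*ux + 2),
        v*vxx + vx**2, 0.5*uxx, 0.5*vxx, 0.0,
        2*u*uxy + uy*(2*ux + 2),
        ux + vy + ux*vy + uy*vx + u*vxy + uxy*v + 1,
        2*v*vxy + 2*vx*(vy + 1), uxy, vxy, 0.0,
        u*uyy + uy**2,
        0.5*u*vyy + 0.5*uyy*v + uy*(vy + 1),
        2*vy + v*vyy + vy**2 + 1, 0.5*uyy, 0.5*vyy, 0.0,
        2*u*(ux + 1), v + u*vx + ux*v, 2*v*vx, ux + 1, vx, 0.0,
        2*u*uy, u + u*vy + uy*v, 2*v*(vy + 1), uy, vy + 1, 0.0,
        u**2, u*v, v**2, u, v, 1.0,
    ]
    return np.array(flat).reshape(6, 6, order='F')

def mat_to_sfp_vec(W):
    """Recover the 12 SFPs from the 6x6 matrix (MATLAB Mat2SFPVec)."""
    w = W.flatten(order='F')          # MATLAB-style (column-major) linear index
    i = lambda k: w[k - 1]            # 1-based index helper
    return np.array([i(34), i(22) - 1, i(28), 2*i(4), i(10), 2*i(16),
                     i(35), i(23), i(29) - 1, 2*i(5), i(11), 2*i(17)])

P = 0.01 * np.random.default_rng(0).standard_normal(12)  # small deformation
assert np.allclose(mat_to_sfp_vec(sfp_vec_to_mat(P)), P)
```

With all twelve parameters zero the matrix reduces to the identity, which is the expected behaviour for an undeformed subset.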
 1. function [P,C,Iter,StopVal]=SubCorr(InterpCoef,f,dfdx,dfdy,SubSize,SFOrder,Xos,dX,dY,P,StopCritVal)
 2. [W,dFdWdP,SFPVec2Mat,Mat2SFPVec,StopCrit]=SFExpressions(SFOrder); % Section 3.2
 3. dfdWdP=dFdWdP(dX(:),dY(:),dfdx(:),dfdy(:));
 4. Hinv=inv(dfdWdP'*dfdWdP); % inverse of Equation (19)
 5. f_bar=mean(f(:)); f_tilde=sqrt(sum((f(:)-f_bar).^2)); % Equations (13) and (14)
 6. flag=0; Iter=1; dP=ones(size(P));
 7. while flag==0
 8. [dXY]=W(dX(:),dY(:),P); % Equation (7)
 9. g=InterpCoef(Xos(2).*ones(size(dXY,1),1)+dXY(:,2),Xos(1).*ones(size(dXY,1),1)+dXY(:,1));
 10. g_bar=mean(g(:)); g_tilde=sqrt(sum((g(:)-g_bar).^2)); % Equations (13) and (14)
 11. StopVal=StopCrit(dP,(SubSize-1)/2); % Equation (23)
 12. if any([StopVal<StopCritVal,Iter>=100])
 13. flag=1;
 14. C=1-sum(((f(:)-f_bar)/f_tilde-(g(:)-g_bar)/g_tilde).^2)/2; % Equation (12) substituted into Equation (15)
 15. else
 16. J=dfdWdP'*(f(:)-f_bar-f_tilde/g_tilde*(g(:)-g_bar)); % Summation of Equation (18)
 17. dP([1:SFOrder*3+0^SFOrder 7:6+SFOrder*3+0^SFOrder])=Hinv*J; % Equation (18)
 18. P=Mat2SFPVec(SFPVec2Mat(P)/SFPVec2Mat(dP)); % Equation (21)
 19. end
 20. Iter=Iter+1;
 21. end
 1. function RD=StereoMatch(n,RD,ImNames1,ImNames2,StopCritVal)
 2. SubExtract=@(Mat,Xos,SubSize) Mat(Xos(2)-(SubSize-1)/2:Xos(2)+(SubSize-1)/2,Xos(1)-(SubSize-1)/2:Xos(1)+(SubSize-1)/2); tic;
 3. F=im2double(imread(ImNames1{1}));
 4. if all(RD.ProcData1(1).ImgFilt), F=imgaussfilt(F,RD.ProcData1(1).ImgFilt(1),'FilterSize',RD.ProcData1(1).ImgFilt(2)); end
 5. G=im2double(imread(ImNames2{1}));
 6. if all(RD.ProcData1(1).ImgFilt), G=imgaussfilt(G,RD.ProcData1(1).ImgFilt(1),'FilterSize',RD.ProcData1(1).ImgFilt(2)); end
 7. InterpCoef=griddedInterpolant({1:1:size(G,1),1:1:size(G,2)},G,'spline');
 8. [dFdx,dFdy]=imgradientxy(F,'prewitt');
 9. P=FeatureMatch(RD.ProcData1,1,F,G,SubExtract); % Section 3.3.2
 10. Pout=NaN(size(P)); C=NaN(1,size(P,2)); Iter=NaN(1,size(P,2)); StopVal=NaN(1,size(P,2)); % preallocate outputs
 11. for q=1:size(P,2) % can be changed to parfor for parallel processing
 12. if (sum(isnan(P(:,q)))==0)&&(sum(isnan(RD.ProcData1(1).Xos(:,q)))==0)
 13. [f,dfdx,dfdy,dX,dY]=SubShapeExtract(RD.ProcData1(1).SubSize(q),RD.ProcData1(1).SubShape(q,:),RD.ProcData1(1).Xos(:,q),F,dFdx,dFdy,SubExtract); % Section 3.2
 14. [Pout(:,q),C(q),Iter(q),StopVal(q)]=SubCorr(InterpCoef,f,dfdx,dfdy,RD.ProcData1(1).SubSize(q),RD.ProcData1(1).SFOrder(q),RD.ProcData1(1).Xos(:,q),dX,dY,P(:,q),StopCritVal); % Section 3.2
 15. end
 16. end
 17. RD.Stereo.P=Pout; RD.Stereo.C=C; RD.Stereo.Iter=Iter; RD.Stereo.StopVal=StopVal;
 18. for d=1:n % determine subset positions in the FIS2 using Equation (32)
 19. RD.ProcData2(d).Xos(1,:)=RD.ProcData2(d).Xos(1,:)+round(RD.Stereo.P(1,:)); RD.ProcData2(d).Xos(2,:)=RD.ProcData2(d).Xos(2,:)+round(RD.Stereo.P(7,:));
 20. end
 21. FailedSubsetsCondition=((RD.Stereo.C>=0.6)==0)|(RD.ProcData2(1).Xos(1,:)+(RD.ProcData2(1).SubSize(:)'-1)/2>size(G,2))|(RD.ProcData2(1).Xos(1,:)-(RD.ProcData2(1).SubSize(:)'-1)/2<1)|(RD.ProcData2(1).Xos(2,:)+(RD.ProcData2(1).SubSize(:)'-1)/2>size(G,1))|(RD.ProcData2(1).Xos(2,:)-(RD.ProcData2(1).SubSize(:)'-1)/2<1);
 22. FailedSubsets=find(FailedSubsetsCondition); PassedSubsets=find(FailedSubsetsCondition==0);
 23. for d=1:n
 24. RD.ProcData1(d).Xos(:,FailedSubsets)=NaN(2,size(FailedSubsets,2));
 25. RD.ProcData2(d).Xos(:,FailedSubsets)=NaN(2,size(FailedSubsets,2));
 26. end
 27. fprintf('Stereo results\t| Time (s) | CC (min) | CC (mean) | Iter (max)\n\t\t\t\t| %7.3f | % .5f | % .6f | %4.0f \nSubsets that failed stereo matching %d/%d\n',toc,min(RD.Stereo.C(PassedSubsets)),nanmean(RD.Stereo.C(PassedSubsets)),max(RD.Stereo.Iter(PassedSubsets)),size(FailedSubsets,2),size(P,2));
 1. function [P]=FeatureMatch(PD,d,F,G,SubExtract)
 2. if exist('vl_sift')~=3, fprintf('\nError occurred, please setup the VLFeat library required for SIFT feature matching (algorithm can be found at: https://www.vlfeat.org)\n'); end; time_before_sift=toc;
 3. [xk1,d1]=vl_sift(im2single(uint8(255*F)));
 4. [xk2,d2]=vl_sift(im2single(uint8(255*G)));
 5. KptsInVacinity=((abs(PD(d).Xos(2,:)-xk1(2,:)')<=PD(d).SubSize/2)+(abs(PD(d).Xos(1,:)-xk1(1,:)')<=PD(d).SubSize/2))==2;
 6. xk1=xk1(:,sum(KptsInVacinity,2)>=1); d1=d1(:,sum(KptsInVacinity,2)>=1);
 7. [matches,scores]=vl_ubcmatch(d1,d2,1.25);
 8. xk1=xk1(1:2,matches(1,:))'; xk2=xk2(1:2,matches(2,:))';
 9. relevantKpts=knnsearch(xk1,PD(d).Xos','K',20);
 10. RansacModel=@(kpts) [[kpts(:,1) kpts(:,2) ones(size(kpts(:,1),1),1)]\kpts(:,3)-[1; 0; 0];[kpts(:,1) kpts(:,2) ones(size(kpts(:,1),1),1)]\kpts(:,4)-[0; 1; 0]]; % solves for affine transformation parameters of Equation (29)
 11. RansacError=@(a,kpts) sum((kpts(:,3:4)'-[1+a(1), a(2), a(3); a(4), 1+a(5), a(6)]*[kpts(:,1)'; kpts(:,2)'; ones(1,size(kpts(:,1)',2))]).^2,1); % Equation (30)
 12. P=NaN(12,size(PD(d).Xos,2));
 13. for q=1:size(PD(d).Xos,2) % can be changed to parfor for parallel processing
 14. try
 15. [a,~]=ransac([xk1(relevantKpts(q,:),:),xk2(relevantKpts(q,:),:)],@(data) RansacModel(data),@(model,data) RansacError(model,data),3,1,'Confidence',99.5);
 16. P(:,q)=[a(1)*PD(d).Xos(1,q)+a(2)*PD(d).Xos(2,q)+a(3); a(1); a(2); 0; 0; 0; a(4)*PD(d).Xos(1,q)+a(5)*PD(d).Xos(2,q)+a(6); a(4); a(5); 0; 0; 0]; % Equation (31)
 17. end
 18. end
 19. fprintf('SIFT found %d matching keypoints in %5.2f seconds\n',size(matches,2),toc-time_before_sift);
 1. function [u,v]=PCM(F,G,SubSize,XosX,XosY,SubExtract)
 2. if (isnan(XosX)==0)&&(isnan(XosY)==0)
 3. NCPS=(fft2(SubExtract(F,[XosX,XosY],SubSize)).*conj(fft2(SubExtract(G,[XosX,XosY],SubSize))))./abs(fft2(SubExtract(F,[XosX,XosY],SubSize)).*conj(fft2(SubExtract(G,[XosX,XosY],SubSize))));
 4. CC=(ifft2(NCPS));
 5. [vid,uid]=find(CC==max(CC(:)));
 6. IndShift=ifftshift(-fix(SubSize/2):ceil(SubSize/2)-1);
 7. u=IndShift(uid);
 8. v=IndShift(vid);
 9. else
 10. u=NaN; v=NaN;
 11. end
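The same normalised cross-power-spectrum idea can be sketched in NumPy. This is an illustrative sketch, not the PCM function itself: the conjugate is placed on the reference spectrum here, so the recovered values equal the shift applied to `g` (the sign convention flips with the ordering of the conjugate), and `pcm_shift` is an assumed name:

```python
import numpy as np

def pcm_shift(f, g):
    """Integer (u, v) shift of g relative to f via phase correlation."""
    ncps = np.conj(np.fft.fft2(f)) * np.fft.fft2(g)
    ncps /= np.abs(ncps)                         # keep phase only
    cc = np.real(np.fft.ifft2(ncps))             # delta at the shift
    v_id, u_id = np.unravel_index(np.argmax(cc), cc.shape)
    n_r, n_c = f.shape
    row_shift = np.fft.ifftshift(np.arange(-(n_r // 2), -(n_r // 2) + n_r))
    col_shift = np.fft.ifftshift(np.arange(-(n_c // 2), -(n_c // 2) + n_c))
    return col_shift[u_id], row_shift[v_id]      # (u, v)

rng = np.random.default_rng(2)
f = rng.random((31, 31))
g = np.roll(f, shift=(4, -3), axis=(0, 1))       # pattern moves +4 rows, -3 cols
assert pcm_shift(f, g) == (-3, 4)
```

The `ifftshift` lookup table plays the same role as `IndShift` on line 6 of PCM: it maps the peak index in the wrapped correlation surface back to a signed integer displacement.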
 1. function PD=ImgCorr(n,PD,FileNames,RefStrat,StopCritVal)
 2. SubExtract=@(Mat,Xos,SubSize) Mat(Xos(2)-(SubSize-1)/2:Xos(2)+(SubSize-1)/2,Xos(1)-(SubSize-1)/2:Xos(1)+(SubSize-1)/2);
 3. for d=2:n, tic; % outer loop
 4. G=im2double(imread(FileNames{d}));
 5. if all(PD(d).ImgFilt), G=imgaussfilt(G,PD(d).ImgFilt(1),'FilterSize',PD(d).ImgFilt(2)); end
 6. InterpCoef=griddedInterpolant({1:1:size(G,1),1:1:size(G,2)},G,'spline'); % Section 2.3.2
 7. if any([RefStrat==1,d==2])
 8. F=im2double(imread(FileNames{d-1}));
 9. if all(PD(d).ImgFilt), F=imgaussfilt(F,PD(d).ImgFilt(1),'FilterSize',PD(d).ImgFilt(2)); end
 10. [dFdx,dFdy]=imgradientxy(F,'prewitt');
 11. PD(d).Xos(1,:)=PD(d-1).Xos(1,:)+fix(PD(d-1).P(1,:));
 12. PD(d).Xos(2,:)=PD(d-1).Xos(2,:)+fix(PD(d-1).P(7,:));
 13. [PD(d).P(1,:),PD(d).P(7,:)]=arrayfun(@(XosX,XosY,SubSize) PCM(F,G,SubSize,XosX,XosY,SubExtract),PD(d).Xos(1,:),PD(d).Xos(2,:),PD(d).SubSize); % Section 3.4.1
 14. else
 15. PD(d).P=PD(d-1).P;
 16. end
 17. P=NaN(size(PD(d).P)); C=NaN(size(PD(d).C)); Iter=NaN(size(PD(d).C)); StopVal=NaN(size(PD(d).C));
 18. for q=1:size(PD(d).Xos,2) % inner loop (can be changed to parfor for parallel processing)
 19. if (sum(isnan(PD(d).P(:,q)))==0)&&(sum(isnan(PD(d).Xos(:,q)))==0)
 20. [f,dfdx,dfdy,dX,dY]=SubShapeExtract(PD(d).SubSize(q),PD(d).SubShape(q,:),PD(d).Xos(:,q),F,dFdx,dFdy,SubExtract); % Section 3.2
 21. [P(:,q),C(q),Iter(q),StopVal(q)]=SubCorr(InterpCoef,f,dfdx,dfdy,PD(d).SubSize(q),PD(d).SFOrder(q),PD(d).Xos(:,q),dX,dY,PD(d).P(:,q),StopCritVal); % Section 3.2
 22. end
 23. end
 24. PD(d).P=P; PD(d).C=C; PD(d).Iter=Iter; PD(d).StopVal=StopVal;
 25. if rem(d-2,10)==0, fprintf('Image/Total | Time (s) | CC (min) | CC (mean) | Iter (max) \n'); end
 26. fprintf(' %4.d/%4.d | %8.3f | %.6f | %.7f | %4.0f \n',d,n,toc,min(PD(d).C),nanmean(PD(d).C),max(PD(d).Iter));
 27. end
 1. function RD=CSTrans(n,RD,WorldCTs,ImgCTs,RefStrat)
 2. CamParams=estimateCameraParameters(ImgCTs,WorldCTs,'NumRadialDistortionCoefficients',2); % Section 2.2
 3. Q1=[CamParams.CameraParameters1.IntrinsicMatrix',[0; 0; 0]]*[[eye(3),[0;0;0]]; 0, 0, 0, 1];
 4. Q2=[CamParams.CameraParameters2.IntrinsicMatrix',[0; 0; 0]]*[[CamParams.RotationOfCamera2',CamParams.TranslationOfCamera2']; 0, 0, 0, 1];
 5. B=CamParams.FundamentalMatrix;
 6. for d=1:n, tic
 7. Xds1=RD.ProcData1(d).Xos+[RD.ProcData1(d).P(1,:); RD.ProcData1(d).P(7,:)]; % Equation (49)
 8. Xds2=RD.ProcData2(d).Xos+[RD.ProcData2(d).P(1,:); RD.ProcData2(d).P(7,:)]; % Equation (49)
 9. indValid=find((isnan(Xds1(1,:))+isnan(Xds1(2,:))+isnan(Xds2(1,:))+isnan(Xds2(2,:)))==0);
 10. if d==1||RefStrat==1
 11. RD.DispTrans(d).Xow(:,indValid)=Triangulation(B,Q1,Q2,undistortPoints(RD.ProcData1(d).Xos(:,indValid)',CamParams.CameraParameters1)',undistortPoints(RD.ProcData2(d).Xos(:,indValid)',CamParams.CameraParameters2)');
 12. else
 13. RD.DispTrans(d).Xow=RD.DispTrans(d-1).Xow;
 14. end
 15. RD.DispTrans(d).Uw(:,indValid)=Triangulation(B,Q1,Q2,undistortPoints(Xds1(:,indValid)',CamParams.CameraParameters1)',undistortPoints(Xds2(:,indValid)',CamParams.CameraParameters2)')-RD.DispTrans(d).Xow(:,indValid); % Equation (51)
 16. RD.DispTrans(d).CamParams=CamParams;
 17. fprintf('CS transformation image: %d/%d\t\ttime: %.3f\n',d,n,toc);
 18. end
 1. function [ptsOut]=Triangulation(B,Q1,Q2,pts1,pts2)
 2. for i=1:size(pts1,2)
 3. T1inv=[1, 0, pts1(1,i); 0, 1, pts1(2,i); 0, 0, 1]; % Equation (35)
 4. T2inv=[1, 0, pts2(1,i); 0, 1, pts2(2,i); 0, 0, 1]; % Equation (35)
 5. B1=T2inv'*B*T1inv; % Equation (36)
 6. [U,~,V]=svd(B1,0);
 7. e1=V(:,3)./norm(V(1:2,3));
 8. e2=U(:,3)./norm(U(1:2,3));
 9. R1=[e1(1), e1(2), 0; -e1(2), e1(1), 0; 0, 0, 1]; % Equation (37)
 10. R2=[e2(1), e2(2), 0; -e2(2), e2(1), 0; 0, 0, 1]; % Equation (37)
 11. B2=R2*B1*R1'; % Equation (38)
 12. phi_1=B2(2,2); phi_2=B2(2,3); phi_3=B2(3,2); phi_4=B2(3,3);
 13. p=[-phi_4*phi_1^2*phi_3*e1(3)^4 + phi_2*phi_1*phi_3^2*e1(3)^4, phi_1^4 + 2*phi_1^2*phi_3^2*e2(3)^2 - phi_1^2*phi_4^2*e1(3)^4 + phi_2^2*phi_3^2*e1(3)^4 + phi_3^4*e2(3)^4, 4*phi_1^3*phi_2 - 2*phi_1^2*phi_3*phi_4*e1(3)^2 + 4*phi_1^2*phi_3*phi_4*e2(3)^2 + 2*phi_1*phi_2*phi_3^2*e1(3)^2 + 4*phi_1*phi_2*phi_3^2*e2(3)^2 - phi_1*phi_2*phi_4^2*e1(3)^4 + phi_2^2*phi_3*phi_4*e1(3)^4 + 4*phi_3^3*phi_4*e2(3)^4, 6*phi_1^2*phi_2^2 - 2*phi_1^2*phi_4^2*e1(3)^2 + 2*phi_1^2*phi_4^2*e2(3)^2 + 8*phi_1*phi_2*phi_3*phi_4*e2(3)^2 + 2*phi_2^2*phi_3^2*e1(3)^2 + 2*phi_2^2*phi_3^2*e2(3)^2 + 6*phi_3^2*phi_4^2*e2(3)^4, -phi_1^2*phi_3*phi_4 + 4*phi_1*phi_2^3 + phi_1*phi_2*phi_3^2 - 2*phi_1*phi_2*phi_4^2*e1(3)^2 + 4*phi_1*phi_2*phi_4^2*e2(3)^2 + 2*phi_2^2*phi_3*phi_4*e1(3)^2 + 4*phi_2^2*phi_3*phi_4*e2(3)^2 + 4*phi_3*phi_4^3*e2(3)^4, -phi_1^2*phi_4^2 + phi_2^4 + phi_2^2*phi_3^2 + 2*phi_2^2*phi_4^2*e2(3)^2 + phi_4^4*e2(3)^4, phi_3*phi_2^2*phi_4 - phi_1*phi_2*phi_4^2];
 14. r=roots(p); % determine roots of polynomial of Equation (43)
 15. r=r(imag(r)==0);
 16. Ds=r.^2./(1+(r.*e1(3)).^2)+(phi_3.*r+phi_4).^2./((phi_1.*r+phi_2).^2+e2(3)^2*(phi_3.*r+phi_4).^2); % Equation (42)
 17. [~,ind]=min(Ds); t=r(ind); % root minimizing Equation (42)
 18. pts1temp=T1inv*R1'*[t^2*e1(3); t; t^2*e1(3)^2+1]; % Equation (44)
 19. pts2temp=T2inv*R2'*[e2(3)*(phi_3*t+phi_4)^2; -(phi_1*t+phi_2)*(phi_3*t+phi_4); (phi_1*t+phi_2)^2+e2(3)^2*(phi_3*t+phi_4)^2]; % Equation (45)
 20. ptsOut1=pts1temp(1:2)./pts1temp(3); ptsOut2=pts2temp(1:2)./pts2temp(3);
 21. [~,~,V]=svd([ptsOut1(1)*Q1(3,:)-Q1(1,:); ptsOut1(2)*Q1(3,:)-Q1(2,:); ptsOut2(1)*Q2(3,:)-Q2(1,:); ptsOut2(2)*Q2(3,:)-Q2(2,:)],0); % Section 2.8
 22. ptsOut(:,i)=V(1:3,4)./V(4,4);
 23. end
Appendix B. Utilizing Parallel Processing for ADIC3D
Appendix C. UndistortPasser Function
Line Numbers  Task Performed 

Line 2–3  Determine the number of complete batches of 100 subset positions and save it as iterations; 
Line 4  for q = 1 to iterations, do 
Line 5  Undistort a batch of 100 subset positions using MATLAB’s undistortPoints function; 
Line 6  end for 
Line 7–9  Undistort the remaining subsets, not processed in the for loop, using undistortPoints; 
 1. function XosOut=UndistortPasser(Xos,CamParams)
 2. n=size(Xos,1);
 3. iterations=floor(n/100);
 4. for i=1:iterations
 5. XosOut((i-1)*100+1:i*100,:)=undistortPoints(Xos((i-1)*100+1:i*100,:),CamParams);
 6. end
 7. if rem(n,100)>0
 8. XosOut(iterations*100+1:iterations*100+rem(n,100),:)=undistortPoints(Xos(iterations*100+1:iterations*100+rem(n,100),:),CamParams);
 9. end
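The batching pattern above (full batches of 100 first, then the remainder) generalises to any per-row function. A NumPy sketch with an assumed stand-in `fn` in place of MATLAB's undistortPoints (`batched_apply` is an illustrative name, not from the framework):

```python
import numpy as np

def batched_apply(xs, fn, batch=100):
    """Apply fn to rows of xs in batches, mirroring the UndistortPasser loop."""
    n = len(xs)
    out = np.empty_like(xs, dtype=float)
    iterations = n // batch                      # number of complete batches
    for i in range(iterations):                  # lines 4-6 of the listing
        out[i*batch:(i+1)*batch] = fn(xs[i*batch:(i+1)*batch])
    if n % batch > 0:                            # lines 7-9: the remainder
        out[iterations*batch:] = fn(xs[iterations*batch:])
    return out

pts = np.arange(250, dtype=float).reshape(125, 2)   # 125 "subset positions"
assert np.array_equal(batched_apply(pts, lambda p: p * 2), pts * 2)
```

Here 125 rows give one full batch of 100 plus a remainder of 25, exercising both branches of the listing.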
Appendix D. Imposed Displacements of Sample 1
Step  ${\widehat{\mathit{u}}}_{\mathit{w}}\text{}\mathbf{Mean}\text{}\left(\mathbf{mm}\right)$  U STD (nm)  ${\widehat{\mathit{w}}}_{\mathit{w}}\text{}\mathbf{Mean}\text{}\left(\mathbf{mm}\right)$  W STD (nm) 

1  0  7.01  0  6.76 
2  0  7.69  10  6.16 
3  0  6.30  20  6.21 
4  0  7.67  −10  6.12 
5  0  6.74  −20  6.33 
6  −10  4.91  0  6.83 
7  −20  5.71  0  7.27 
8  10  6.53  0  6.79 
9  20  5.69  0  7.37 
10  −10  5.99  −10  4.57 
11  −20  14.65  −20  25.19 
12  10  7.65  10  6.43 
13  20  6.10  20  6.54 
14  −10  5.70  10  6.08 
15  −20  5.14  20  6.45 
16  10  6.29  −10  5.01 
17  20  5.99  −20  6.07 
18  0  6.36  0  7.59 
Appendix E. Accounting for Tangential Distortion
Variable  Variable Description 

FileNames1  Cell array of character vectors containing the image file names of the first image series. All images need to be the same size. 
FileNames2  Cell array of character vectors containing the image file names of the second image series. All images need to be the same size. 
Mask  Logical matrix, which is the same size as the images, indicating which pixels should not be analyzed during correlation. 
GaussFilt  Defines the standard deviation and window size, in pixels, of the Gaussian filter as [FiltSigma, FiltSize], where {FiltSigma ∈ ℝ | FiltSigma > 0} and {FiltSize ∈ ℕ}. 
StepSize  Step size in pixels {StepSize ∈ ℕ}. 
SubSize  Subset size in pixels {SubSize = 2k + 1 | k ∈ ℕ}. 
SubShape  Subset shape {SubShape ∈ {'Square', 'Circle'}}. 
SFOrder  Dictates the SF order {SFOrder ∈ ℤ | 0 ≤ SFOrder ≤ 2}. 
RefStrat  Logical statement dictating reference image strategy (Section 2.6.1). 
StopCritVal  Defines the stopping criterion value {StopCritVal ∈ ℝ | StopCritVal > 0}. 
WorldCTs  Location of CTs in the world CS defined according to MATLAB’s estimateCameraParameters function. 
ImgCTs  Location of CTs in the distorted sensor CSs defined according to MATLAB’s estimateCameraParameters function. 
Variable  Variable Description 

P(b,q)  SFPs (b = 1 for $u$ and b = 7 for $v$). 
C(q)  ZNCC coefficient. 
Iter(q)  Number of iterations until stopping criterion is satisfied (maximum of 100 iterations). 
StopVal(q)  Final stopping criterion value for subset q. 
Variable  Variable Description 

ImgName  Deformed image name. 
ImgSize(b)  Image size (b = 1 for rows and b = 2 for columns). 
ImgFilt(b)  Standard deviation (b = 1) and window size (b = 2) of the Gaussian filter, in pixels. 
SubSize(q)  Subset size in pixels. 
SubShape(q,:)  Subset shape. 
SFOrder(q)  SF order. 
Xos(b,q)  Reference subset position in the distorted sensor CS of the relevant camera (b = 1 for ${x}^{o}$ and b = 2 for ${y}^{o}$). 
P(b,q)  SFPs (b = 1 for $u$ and b = 7 for $v$). 
C(q)  ZNCC coefficient. 
Iter(q)  Number of iterations until stopping criterion is satisfied (maximum of 100 iterations). 
StopVal(q)  Final stopping criterion value. 
Variable  Variable Description 

Xow(b,q)  Reference subset position in the world CS (b = 1 for ${\widehat{x}}_{w}^{o}$, b = 2 for ${\widehat{y}}_{w}^{o}$, and b = 3 for ${\widehat{z}}_{w}^{o}$). 
Uw(b,q)  Displacement in the world CS (b = 1 for ${\widehat{u}}_{w}$, b = 2 for ${\widehat{v}}_{w},$ and b = 3 for ${\widehat{w}}_{w}$). 
CamParams  Calibration parameters (extrinsic, intrinsic, and radial distortion parameters). 
Line Numbers  Task Performed 

Lines 2–3  Compute image names of both image series; 
Lines 4–5  Compute number of images and size of the first image in first image series; 
Lines 6–7  Create regularly spaced reference subset positions, Xos; 
Line 8  Remove subsets containing invalid pixels which are defined by Mask; 
Lines 9–12  Preallocate the ResultData structure array; 
Line 13  Call subroutine StereoMatch to perform stereo matching; 
Line 15  Call subroutine ImgCorr to perform temporal matching of first image series; 
Line 17  Call subroutine ImgCorr to perform temporal matching of second image series; 
Line 18  Call subroutine CSTrans to perform displacement transformation from the distorted sensor CSs to the world CS; 
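The grid-construction step of lines 6–8 (regularly spaced reference subset positions, with centres on masked pixels removed) can be sketched as follows. This is a NumPy illustration, not the ADIC3D MATLAB code; the border handling and point ordering are assumptions:

```python
import numpy as np

def subset_grid(img_shape, sub_size, step_size, mask=None):
    """Regular grid of reference subset centres: spacing of step_size
    pixels, a border of half a subset, and centres landing on masked
    pixels discarded. (Border handling and ordering are assumptions,
    not ADIC3D's exact indexing.)"""
    half = (sub_size - 1) // 2
    ys = np.arange(half, img_shape[0] - half, step_size)
    xs = np.arange(half, img_shape[1] - half, step_size)
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel()])   # 2 x q array of (x, y) columns
    if mask is not None:
        keep = ~mask[pts[1], pts[0]]         # drop centres on invalid pixels
        pts = pts[:, keep]
    return pts
```

For a 20 × 20 image with SubSize = 5 and StepSize = 5 this yields a 4 × 4 grid of 16 centres.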
Line Numbers  Task Performed 

Line 2  switch SubShape; 
Line 3  case SubShape = 'Square', do 
Lines 4–6  Extract ${f}_{i}$ and $\nabla {\mathit{f}}_{i}$ using SubExtract; 
Line 7  Compute $\Delta {\mathit{x}}_{i}$ using SubSize; 
Line 8  case SubShape = 'Circle', do 
Lines 9–11  Extract ${f}_{i}$ and $\nabla {\mathit{f}}_{i}$ using SubExtract; 
Line 12  Compute $\Delta {\mathit{x}}_{i}$ using SubSize; 
Line 13  Determine mask of elements that fall within the circular subset; 
Lines 14–16  Use mask to extract appropriate data for circular subset; 
Line 17  end switch 
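The circular-subset masking of lines 13–14 amounts to extracting a square block and keeping only the pixels inside the inscribed circle. A minimal NumPy sketch of that mask (illustrative, not the toolbox code):

```python
import numpy as np

def circle_mask(sub_size):
    """Boolean mask selecting the pixels of a sub_size x sub_size square
    block that fall within the inscribed circular subset."""
    r = (sub_size - 1) / 2               # subset radius in pixels
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    return dx**2 + dy**2 <= r**2
```

For SubSize = 5 the mask keeps 13 of the 25 pixels.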
Line Numbers  Task Performed 

Line 2  switch SFOrder 
Lines 3–8  case SFOrder = 0, do assign functions for zero-order SF; 
Lines 9–14  case SFOrder = 1, do assign functions for first-order SF; 
Lines 15–20  case SFOrder = 2, do assign functions for second-order SF; 
Line 21  end switch 
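As an illustration of what these shape functions do, a first-order SF warps reference subset offsets into the deformed image as below. The parameter ordering here is illustrative, not ADIC3D's convention (which stores u at index 1 and v at index 7 of a 12-element SFP vector):

```python
def warp_first_order(P, dx, dy):
    """First-order subset shape function: maps reference subset offsets
    (dx, dy) to offsets in the deformed image.
    P = (u, ux, uy, v, vx, vy); ordering is illustrative only."""
    u, ux, uy, v, vx, vy = P
    return (dx + u + ux * dx + uy * dy,
            dy + v + vx * dx + vy * dy)
```

With all SFPs zero the warp is the identity; with only u and v non-zero it is a rigid translation.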
Line Numbers  Task Performed 

Line 2  Call SFExpressions to assign equations dependent on the SF order; 
Line 3  Compute $\nabla {\mathit{f}}_{i}\frac{\partial {\mathit{W}}_{i}}{\partial \mathit{P}}$; 
Line 4  Compute $\mathit{H}^{-1}$, Equation (19); 
Line 5  Compute $\overline{f}$ and $\tilde{f}$; 
Line 6  Initialise flag ← 0, Iter ← 1 and $\Delta \mathit{P}\leftarrow 1$; 
Line 7  while flag = 0, do 
Line 8  Compute $\Delta {\mathit{x}}_{i}^{\prime},$ Equation (7), using estimates of $\mathit{P}$; 
Line 9  Compute $\mathit{g}$ using interpolation coefficients; 
Line 10  Compute $\overline{g}$ and $\tilde{g}$; 
Line 11  Compute ‖$\mathsf{\Delta}\mathit{P}$‖ using Equation (23); 
Line 12  if ‖$\mathsf{\Delta}\mathit{P}$‖ < StopCritVal or Iter ≥ 100, do 
Line 13  Set flag ← 1; 
Line 14  Compute C, Equation (12) substituted into Equation (15); 
Line 15  else, do 
Line 16  Compute $\mathit{J}$, Summation expression of Equation (18); 
Line 17  Compute $\Delta \mathit{P}$, Equation (18); 
Line 18  Update $\mathit{P}$, Equation (21); 
Line 19  end if 
Line 20  Set Iter ← Iter + 1; 
Line 21  end while 
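The ZNCC coefficient reported at line 14 measures subset similarity independently of affine intensity changes. A minimal sketch of the criterion (a NumPy illustration, not the toolbox's MATLAB implementation):

```python
import numpy as np

def zncc(f, g):
    """Zero-mean normalised cross-correlation between reference subset f
    and deformed subset g: 1 for a perfect match, and insensitive to
    intensity offset and scale."""
    fz = f - f.mean()
    gz = g - g.mean()
    return float(np.sum(fz * gz) / np.sqrt(np.sum(fz**2) * np.sum(gz**2)))
```

Note that zncc(f, 2·f + 3) still returns 1, which is why the criterion tolerates lighting differences between views.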
Line Numbers  Task Performed 

Line 2  Define SubExtract function to extract square subset data; 
Lines 3–6  Load FIS1 and FIS2 and perform Gaussian filtering on them; 
Line 7  Compute interpolation coefficients of G using MATLAB’s griddedInterpolant function; 
Line 9  Call subroutine FeatureMatch to determine SFP initial estimates; 
Line 10  Initialise temporary storage variables used to save correlation information during the for loop; 
Line 11  for subset number q = 1 to number of subsets, do 
Line 12  if FeatureMatch determined SFP initial estimates for subset q, do 
Line 13  Call subroutine SubShapeExtract; 
Line 14  Call subroutine SubCorr; 
Line 15  end if 
Line 16  end for 
Line 17  Save correlation information to RD.Stereo variable; 
Line 18  for image number d = 1 to d = n, do 
Line 19  Compute the subset positions in the FIS2 using Equation (32); 
Line 20  end for 
Lines 21–22  Determine subsets which fail stereo matching and subsets which pass; 
Lines 23–26  Set the subset position of subsets which fail stereo matching to NaNs throughout both image series; 
Line 27  Display information for stereo matching; 
Line Numbers  Task Performed 

Line 2  if the VLFeat library is not set up, then return an error; 
Line 3–4  Compute keypoint locations and descriptors for F and G using vl_sift; 
Line 5  Determine vector KptsInVacinity, identifying the keypoints of F that fall within the perimeter of square subsets of F equivalent in size to that specified for the subset under consideration; 
Line 6  Eliminate keypoints (and their associated descriptors) of F which do not fall within the perimeter of any of the subsets; 
Line 7–8  Determine matching keypoints using vl_ubcmatch; 
Line 9  Determine the 20 nearest keypoints for each subset stored in matrix relevantKpts; 
Lines 10–11  Define anonymous functions RansacModel and RansacError for determining affine transformation parameters (of Equation (29)) and evaluating Equation (30), respectively; 
Line 12  Initialise P as NaNs; 
Line 13  for subset number q = 1 to number of subsets, do 
Line 14  try 
Line 15  Use MSAC to determine affine transformation parameters for subset q from its relevant keypoints; 
Line 16  Convert affine transformation parameters to secondorder SFPs using Equation (31); 
Line 17  end try 
Line 18  end for 
Line 19  Display information for SIFT feature matching; 
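The model-fitting step inside MSAC (line 15) solves for affine transformation parameters from matched keypoints. A least-squares sketch of just that step, without the robust inlier loop or the conversion to second-order SFPs of Equation (31):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src keypoints onto dst
    keypoints (the model-fitting step inside MSAC; robust inlier
    selection is omitted). src, dst: (n, 2) arrays with n >= 3.
    Returns the 2x3 matrix A with dst ~= [x, y, 1] @ A.T."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])    # homogeneous source points
    A, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return A.T

# Example: matched keypoints related by a pure translation of (2, -1)
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + np.array([2., -1.])
A = fit_affine(src, dst)
```

MSAC repeats this fit on random minimal samples and keeps the parameters with the lowest truncated residual cost.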
Line Numbers  Task Performed 

Line 2  if subset positions do not contain NaNs, do 
Line 3  Compute normalized crosspower spectrum in the frequency domain; 
Line 4  Convert back to spatial domain; 
Line 5  Find index of the maximum correlation coefficient; 
Line 6  Compute an index vector relating indices of the correlation coefficient matrix to their corresponding displacements; 
Line 7–8  Obtain displacements using index of the maximum correlation coefficient; 
Line 9  else, do 
Line 10  Set output displacements to NaN; 
Line 11  end if 
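The PCM steps above (lines 3–8) can be sketched with NumPy's FFT: the integer-pixel shift is the peak of the inverse transform of the normalised cross-power spectrum. This is an illustrative sketch, not the toolbox's MATLAB code:

```python
import numpy as np

def pcm(f, g):
    """Integer-pixel displacement (dy, dx) of deformed subset g relative
    to reference subset f, via the peak of the inverse FFT of the
    normalised cross-power spectrum."""
    R = np.conj(np.fft.fft2(f)) * np.fft.fft2(g)
    R /= np.abs(R) + 1e-12                   # normalised cross-power spectrum
    corr = np.real(np.fft.ifft2(R))          # back to the spatial domain
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap matrix indices to signed displacements
    return tuple(int(i) if i <= s // 2 else int(i) - s
                 for i, s in zip(idx, corr.shape))
```

For a noise-free circular shift the peak is an exact delta, so the estimate recovers the shift exactly; this integer estimate seeds the subpixel optimization.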
Line Numbers  Task Performed 

Line 2  Define SubExtract function to extract square subset data; 
Line 3  for image number d = 2 to d = n, do 
Line 4  Define $G$; 
Line 5  Perform Gaussian filtering on $G$ using MATLAB’s imgaussfilt function; 
Line 6  Compute interpolation coefficients of $G$ using MATLAB’s griddedInterpolant function; 
Line 7  if first image of correlation run or RefStrat = 1, do 
Line 8  Define $F$; 
Line 9  Perform Gaussian filtering on $F$ using MATLAB’s imgaussfilt function; 
Line 10  Compute gradients for $F$ (compute $\nabla F$); 
Lines 11–12  Displace Xos with previous image correlation run displacement SFPs (incremental strategy); 
Line 13  Call subroutine PCM to compute initial estimates of displacement SFPs; 
Line 14  else, do 
Line 15  Set P(d) ← P(d−1); 
Line 16  end if 
Line 17  Initialise temporary storage variables used to save correlation information during the inner loop; 
Line 18  for subset number q = 1 to number of subsets, do 
Line 19  if subset q passed stereo matching, do 
Line 20  Call subroutine SubShapeExtract; 
Line 21  Call subroutine SubCorr; 
Line 22  end if 
Line 23  end for 
Line 24  Save correlation information to PD variable; 
Lines 25–26  Display results for image d correlation; 
Line 27  end for 
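The Gaussian prefiltering applied at lines 5 and 9 of the table can be sketched as a separable convolution. This NumPy illustration stands in for MATLAB's imgaussfilt; its zero-padded boundary handling is a simplification:

```python
import numpy as np

def gauss_filter(img, sigma, size):
    """Separable Gaussian prefilter (the role imgaussfilt plays in the
    table above). Zero-padded edges are a simplification of MATLAB's
    default boundary handling."""
    r = (size - 1) // 2
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                                # normalise the 1D kernel
    smooth = lambda m: np.convolve(m, k, mode='same')
    out = np.apply_along_axis(smooth, 0, img)   # filter columns
    return np.apply_along_axis(smooth, 1, out)  # then rows
```

Because the kernel is normalised, interior pixels of a constant image are unchanged; such prefiltering reduces the interpolation-induced bias discussed in the literature.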
Line Numbers  Task Performed 

Line 2  Compute calibration parameters using MATLAB’s estimateCameraParameters function; 
Lines 3–4  Compute canonical projection matrices of the first and second cameras; 
Line 5  Load the fundamental matrix; 
Line 6  for image number d = 1 to d = n, do 
Lines 7–8  Compute ${\mathit{x}}_{{s}_{1}}^{d}$ and ${\mathit{x}}_{{s}_{2}}^{d}$, Equation (49); 
Line 9  Determine indices of subsets which do not contain NaNs in either ${\mathit{x}}_{{s}_{1}}^{d}$ or ${\mathit{x}}_{{s}_{2}}^{d}$; 
Line 10  if first image pair of image set or RefStrat = 1, do 
Line 11  Compute ${\widehat{\mathit{x}}}_{w}^{o}$ (for subsets identified in line 9) using the Triangulation subroutine and MATLAB’s undistortPoints function; 
Line 12  else, do 
Line 13  Assign ${\widehat{\mathit{x}}}_{w}^{o}$ of previous image pair to current image pair; 
Line 14  end if 
Line 15  Compute ${\widehat{\mathit{x}}}_{w}^{d}$ (for subsets identified in line 9), using the Triangulation subroutine and MATLAB’s undistortPoints function, and subtract ${\widehat{\mathit{x}}}_{w}^{o}$ from it to determine ${\widehat{u}}_{w}$, ${\widehat{v}}_{w}$ and ${\widehat{w}}_{w}$ using Equation (51); 
Line 16  Save calibration parameters; 
Line 18  end for 
Line Numbers  Task Performed 

Line 2  for subset pair number q = 1 to number of subsets, do 
Lines 3–4  Determine translation matrices of Equation (35); 
Line 5  Apply translation matrices to fundamental matrix according to Equation (36); 
Lines 6–8  Compute epipoles and normalize them; 
Lines 9–10  Compute rotation matrices according to Equation (37); 
Line 11  Apply rotation matrices to fundamental matrix according to Equation (38); 
Lines 12–13  Compute polynomial coefficients of Equation (43); 
Lines 14–15  Compute real roots of the polynomial; 
Line 16  Evaluate polynomial of Equation (42) at these roots; 
Line 17  Determine root corresponding to minimum of Equation (42); 
Lines 18–20  Compute ideal subset positions according to Equations (44) and (45); 
Lines 21–22  Determine 3D position of noiseless subset positions using linear triangulation method; 
Line 23  end for 
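The linear triangulation used at lines 21–22 can be sketched via the standard DLT/SVD construction: each view contributes two linear constraints on the homogeneous 3D point, whose null-space solution is then dehomogenised. The camera matrices below are illustrative, not ADIC3D's calibrated ones:

```python
import numpy as np

def linear_triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: the 3D point is the SVD null-space of
    the constraints stacked from both views. P1, P2: 3x4 projection
    matrices; x1, x2: 2-vectors of image coordinates."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]              # right singular vector
    return X[:3] / X[3]                      # dehomogenise

# Example with two illustrative cameras (one translated along x)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.], [0.], [0.]]])
X_true = np.array([1., 2., 5.])
x1 = P1 @ np.append(X_true, 1.); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.); x2 = x2[:2] / x2[2]
X = linear_triangulate(P1, P2, x1, x2)
```

With noiseless projections the recovered point matches X_true; the polynomial method above first corrects the noisy subset positions so that this linear step is exact.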
Step  ADIC3D  DICe  Strain Master  

Bias  Variance  Bias  Variance  Bias  Variance  
3  47.2  9.17  37.1  23  55.5  14.1 
7  44.2  7.65  41.1  21.6  50.2  13 
15  61.6  13.4  57.1  33.5  77.4  20.1 
Step  ADIC3D  DICe  Strain Master  

Bias  Variance  Bias  Variance  Bias  Variance  
3  1.55  2.48  5.14  4.59  11.3  5.89 
7  1.4  2.16  4.48  4.4  3.07  6.47 
15  2.35  3.52  9.92  6.86  7.03  21.2 
Image Pair  250  500  750  1000  

Metric  Mean  Max  Mean  Max  Mean  Max  Mean  Max 
${\widehat{u}}_{w}$  0.0342  0.131  0.0354  0.113  0.0373  0.118  0.0374  0.122 
${\widehat{v}}_{w}$  0.374  2.19  0.6  2.57  0.875  2.97  1.04  3.84 
${\widehat{w}}_{w}$  0.093  0.443  0.193  0.744  0.291  0.926  0.345  1.02 
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Atkinson, D.; Becker, T.H. Stereo Digital Image Correlation in MATLAB. Appl. Sci. 2021, 11, 4904. https://doi.org/10.3390/app11114904