# MoReLab: A Software for User-Assisted 3D Reconstruction


## Abstract


## 1. Introduction

## 2. Related Work

- A graphical user interface for the user to add feature points and correspondences manually to model featureless videos;
- Several primitive shapes to model the most common industrial components.

## 3. Method

#### 3.1. Graphical User Interface

#### 3.2. Pipeline

#### 3.2.1. Manual Feature Extraction

#### 3.2.2. Extract Keyframes

#### 3.2.3. Bundle Adjustment

#### 3.2.4. Primitive Tools

**Rectangle Tool:** This tool models planar surfaces. To estimate a rectangle, the user clicks on four features in anti-clockwise order. The 3D sparse points corresponding to the four selected features are used to compute new vertices forming a rectangle, in which all inner angles are constrained to be 90 degrees.

**Quadrilateral Tool:** This tool creates a quadrilateral from four 2D features by connecting the corresponding 3D sparse points. Unlike the rectangle tool, there is no 90-degree constraint: opposite sides need not be parallel, and inner angles need not be 90 degrees. If the four selected points do not lie in a single plane, the quadrilateral is not planar either.

**Center Cylinder Tool:** This tool models a cylindrical object and is useful when the center of the base of the cylindrical equipment is visible. The user clicks on four points; each point can be either a 2D feature or an area containing a 3D primitive. For 2D features, we take the corresponding 3D sparse point computed by bundle adjustment. The first three points define the base of the cylinder: the first is the center of the base, the second defines an axis point, and the third defines the radius. The fourth point determines the height of the cylinder. The cylinder is estimated by computing a new set of axes. Let us denote the input 3D points as $\mathbf{P}_1$, $\mathbf{P}_2$, $\mathbf{P}_3$, and $\mathbf{P}_4$.
We define a reference system:

$$\mathbf{T}=\frac{\mathbf{P}_2-\mathbf{P}_1}{\lVert \mathbf{P}_2-\mathbf{P}_1\rVert}\qquad \mathbf{b}=\mathbf{P}_3-\mathbf{P}_1\qquad \mathbf{N}=\frac{\mathbf{T}\times \mathbf{b}}{\lVert \mathbf{T}\times \mathbf{b}\rVert}\qquad \mathbf{B}=\mathbf{T}\times \mathbf{N}.$$

**Base Cylinder Tool:** This tool creates a cylinder whose first three selected points lie on its base, while the fourth point determines its height. This covers most industrial scenarios, where only the surface of cylindrical equipment is visible and the base center is not. As with the other tools, the user selects points by clicking on them; each point can be either a 2D feature or an area containing a 3D primitive, and for 2D features we take the corresponding 3D sparse point computed by bundle adjustment. As in the center cylinder tool, we first compute the local axes $\mathbf{T}$, $\mathbf{B}$, and $\mathbf{N}$. In this local system, the first point is placed at the origin, while the second and third 3D points are projected onto $\mathbf{B}$ and $\mathbf{T}$ to obtain their 2D locations in the plane spanned by $\mathbf{B}$ and $\mathbf{T}$. Given these three 2D points, we find the circle passing through them; if the three points are collinear, no circle can be estimated because its radius would be infinite. Once the center and radius of this circle are known, we compute the base and top points as in the center cylinder tool.

**Curved Cylinder Tool:** This tool models curved pipes and other curved cylindrical equipment. The user clicks on four points anywhere in the image.
Then, the user clicks on a sparse 3D point obtained from bundle adjustment; this last point assigns an approximate depth to the curve just defined. To do so, we first estimate the plane containing this 3D point, denoted as $\mathbf{P}$. A plane is typically defined as:

$$ax+by+cz+d=0,$$

Denoting by $M_1,\dots,M_4$ the rows of the projection matrix $M$, the clicked 2D location $(x, y)$ and the plane constraint give:

$$M=\begin{bmatrix}M_1\\ M_2\\ M_3\\ M_4\end{bmatrix}\qquad x=\frac{M_1\mathbf{X}}{M_3\mathbf{X}}\qquad y=\frac{M_2\mathbf{X}}{M_3\mathbf{X}}\qquad \begin{bmatrix}a & b & c\end{bmatrix}\cdot \mathbf{X}+d=0.$$

Equation (4) can be rearranged into the form of a linear system $A\mathbf{X}=\mathbf{b}$, and a linear solver finds $\mathbf{X}$. Through this procedure, four 3D points are obtained corresponding to the clicked points on the frame. These four 3D points act as control points to estimate a Bézier curve [25] on the frame. Similarly, the user can define the same curve from a different viewpoint. The curves defined at different viewpoints are then optimized to obtain the final curve in 3D space. This optimization minimizes the sum of the Euclidean distances between control points across frames, plus the Euclidean distance between each projected control point and the corresponding 2D feature location in each frame containing the curve. Assume that $m$ frames contain curves. Let $\mathbf{x}_{ij}$ denote the $i$-th feature location on the $j$-th image, and $\mathbf{CP}_{ij}$ the $i$-th control point on the $j$-th frame.
Let $\mathbf{X}_i$ denote the corresponding $i$-th 3D point and $\mathbf{C}_j$ the camera parameters of the $j$-th image. The objective function for the optimization of the curves is then:

$$\arg\min_{\mathbf{CP}_{ij}}\;\sum_{j=1}^{m-1}\lVert \mathbf{CP}_{j}-\mathbf{CP}_{j+1}\rVert+\sum_{j=1}^{m}\sum_{i=1}^{4} d\big(f(\mathbf{CP}_{ij},\mathbf{C}_{j}),\,\mathbf{x}_{ij}\big)$$

The optimal control points obtained from this optimization define the final Bézier curve, and the cylinder is built around it. To define the radius of the curved cylinder, the user clicks on a 3D point, and a series of cylinders is computed around the final curve.
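The local axes construction shared by the cylinder tools, and the circle-through-three-points step of the base cylinder tool, can be sketched as follows (a minimal NumPy sketch; the function names are ours, not MoReLab's API):

```python
import numpy as np

def local_frame(p1, p2, p3):
    """Build the T, N, B axes from three 3D points, as in the center
    cylinder tool: T is the axis direction, N the plane normal, and
    B = T x N completes the right-handed frame."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    t = (p2 - p1) / np.linalg.norm(p2 - p1)
    b = p3 - p1
    n = np.cross(t, b)
    n /= np.linalg.norm(n)
    return t, n, np.cross(t, n)

def circle_through_3_points(q1, q2, q3):
    """Center and radius of the circle through three 2D points.
    Returns None for (near-)collinear points, i.e., infinite radius."""
    q1, q2, q3 = (np.asarray(q, dtype=float) for q in (q1, q2, q3))
    # Intersect the two perpendicular bisectors via a 2x2 linear system:
    # 2 (qk - q1) . c = |qk|^2 - |q1|^2, for k = 2, 3.
    a = np.array([q2 - q1, q3 - q1])
    rhs = 0.5 * np.array([q2 @ q2 - q1 @ q1, q3 @ q3 - q1 @ q1])
    if abs(np.linalg.det(a)) < 1e-12:
        return None  # collinear: no finite circle
    center = np.linalg.solve(a, rhs)
    return center, np.linalg.norm(center - q1)
```

The base cylinder tool would feed the three projected 2D base points into `circle_through_3_points` and then sweep the resulting circle along `T` to obtain the cylinder mesh.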

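The curved cylinder tool relies on cubic Bézier curves [25] defined by four control points. As an illustration of the curve representation being optimized (our own sketch in Bernstein form, not MoReLab code):

```python
import numpy as np

def cubic_bezier(control_points, t):
    """Evaluate a cubic Bézier curve with four control points at the
    parameter values t in [0, 1], using the Bernstein basis."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in control_points)
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)
```

The curve interpolates its endpoints (`t = 0` gives the first control point, `t = 1` the last), which is why optimizing the control points directly also pins down the reconstructed pipe's endpoints.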
#### 3.2.5. Calibration and Measurements

## 4. Experiments and Results

#### 4.1. Cuboid Modeling

#### 4.2. Jet Pump Beam Modeling

#### 4.3. Cylinder Modeling

#### 4.4. Curved Pipe Modeling

#### 4.5. Additional Experiments

#### 4.6. Discussion

#### 4.7. Measurement Results

#### 4.7.1. One-Measurement Calibration

#### 4.7.2. Three-Measurement Calibration

#### 4.7.3. Limitations

## 5. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| SfM | Structure from Motion |
| MoReLab | Movie Reconstruction Laboratory |

## References

1. Vacca, G. 3D Survey with Apple LiDAR Sensor—Test and Assessment for Architectural and Cultural Heritage. Heritage **2023**, 6, 1476–1501.
2. Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A low cost 3D scanner based on structured light. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2001; Volume 20, pp. 299–308.
3. Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L. Hand-Held Acquisition of 3D Models with a Video Camera. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling, Ottawa, ON, Canada, 8 October 1999; pp. 14–23.
4. Schönberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
5. Rupnik, E.; Daakir, M.; Pierrot Deseilligny, M. MicMac—A free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. **2017**, 2, 14.
6. Cernea, D. OpenMVS: Multi-View Stereo Reconstruction Library. City **2020**, 5, 7.
7. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Proceedings of the International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–372.
8. Van Den Hengel, A.; Dick, A.; Thormählen, T.; Ward, B.; Torr, P.H. VideoTrace: Rapid interactive scene modelling from video. ACM Trans. Graph. **2007**, 26, 86-es.
9. Sinha, S.N.; Steedly, D.; Szeliski, R.; Agrawala, M.; Pollefeys, M. Interactive 3D architectural modeling from unordered photo collections. ACM Trans. Graph. **2008**, 27, 1–10.
10. Xu, M.; Li, M.; Xu, W.; Deng, Z.; Yang, Y.; Zhou, K. Interactive mechanism modeling from multi-view images. ACM Trans. Graph. **2016**, 35, 1–13.
11. Rasmuson, S.; Sintorn, E.; Assarsson, U. User-guided 3D reconstruction using multi-view stereo. In Proceedings of the Symposium on Interactive 3D Graphics and Games, San Francisco, CA, USA, 5–7 May 2020; pp. 1–9.
12. Habbecke, M.; Kobbelt, L. An Intuitive Interface for Interactive High Quality Image-Based Modeling. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2009; Volume 28, pp. 1765–1772.
13. Baldacci, A.; Bernabei, D.; Corsini, M.; Ganovelli, F.; Scopigno, R. 3D reconstruction for featureless scenes with curvature hints. Vis. Comput. **2016**, 32, 1605–1620.
14. Doron, Y.; Campbell, N.D.; Starck, J.; Kautz, J. User directed multi-view-stereo. In Proceedings of the Computer Vision—ACCV 2014 Workshops, Singapore, 1–2 November 2014; Springer: Berlin/Heidelberg, Germany, 2015; pp. 299–313.
15. Töppe, E.; Oswald, M.R.; Cremers, D.; Rother, C. Image-based 3D modeling via Cheeger sets. In Proceedings of the Computer Vision—ACCV 2010: 10th Asian Conference on Computer Vision, Queenstown, New Zealand, 8–12 November 2010; Springer: Berlin/Heidelberg, Germany, 2011; pp. 53–64.
16. Chen, T.; Zhu, Z.; Shamir, A.; Hu, S.M.; Cohen-Or, D. 3-Sweep: Extracting editable objects from a single photo. ACM Trans. Graph. **2013**, 32, 1–10.
17. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. Commun. ACM **2021**, 65, 99–106.
18. Tewari, A.; Thies, J.; Mildenhall, B.; Srinivasan, P.P.; Tretschk, E.; Wang, Y.; Lassner, C.; Sitzmann, V.; Martin-Brualla, R.; Lombardi, S.; et al. Advances in Neural Rendering. Comput. Graph. Forum **2022**, 41, 703–735.
19. Chan, E.R.; Monteiro, M.; Kellnhofer, P.; Wu, J.; Wetzstein, G. pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 5799–5809.
20. Tu, Z.; Huang, Z.; Chen, Y.; Kang, D.; Bao, L.; Yang, B.; Yuan, J. Consistent 3D hand reconstruction in video via self-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. **2023**, 45, 9469–9485.
21. Longuet-Higgins, H.C. A computer algorithm for reconstructing a scene from two projections. Nature **1981**, 293, 133–135.
22. Banterle, F.; Gong, R.; Corsini, M.; Ganovelli, F.; Gool, L.V.; Cignoni, P. A Deep Learning Method for Frame Selection in Videos for Structure from Motion Pipelines. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 3667–3671.
23. Nocerino, E.; Lago, F.; Morabito, D.; Remondino, F.; Porzi, L.; Poiesi, F.; Rota Bulo, S.; Chippendale, P.; Locher, A.; Havlena, M.; et al. A Smartphone-Based 3D Pipeline for the Creative Industry—The Replicate EU Project. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2017**, XLII-2/W3, 535–541.
24. Branch, M.A.; Coleman, T.F.; Li, Y. A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems. SIAM J. Sci. Comput. **1999**, 21, 1–23.
25. Gordon, W.J.; Riesenfeld, R.F. Bernstein–Bézier Methods for the Computer-Aided Design of Free-Form Curves and Surfaces. J. ACM **1974**, 21, 293–310.
26. Cignoni, P.; Callieri, M.; Corsini, M.; Dellepiane, M.; Ganovelli, F.; Ranzuglia, G. MeshLab: An open-source mesh processing tool. In Proceedings of the Eurographics Italian Chapter Conference, Salerno, Italy, 2–4 July 2008; Volume 2008, pp. 129–136.

**Figure 1.** Examples of frames from videos captured in industrial environments. These videos are not suitable for automatic SfM tools due to issues such as low resolution, aggressive compression, strong and moving directional lighting (e.g., a torchlight mounted on the camera), motion blur, featureless surfaces, liquid turbulence, low lighting, etc.

**Figure 2.** The graphical user interface of MoReLab. The toolbar at the top allows the user to switch between different tools.

**Figure 4.** Modeling a cuboid with Metashape, 3-Sweep, and MoReLab: (**a**) A frame of input video; (**b**) Cuboid modeling with Metashape; (**c**) Paint strokes snapped to cuboid outline; (**d**) Cuboid modeling with 3-Sweep; (**e**) Modeling with rectangle tool; (**f**) MeshLab visualization of estimated surfaces of the cuboid.

**Figure 5.** The jet pump beam modeled with the software programs under consideration: (**a**) Metashape reconstruction output; (**b**) Another view of (**a**); (**c**) Paint strokes snapped to jet pump beam outline; (**d**) Output obtained by modeling the jet pump beam with 3-Sweep; (**e**) Estimation of the jet pump beam surface using the quadrilateral tool in MoReLab; (**f**) Output obtained by modeling the jet pump beam with MoReLab.

**Figure 6.** An example of modeling a cylinder with Metashape, 3-Sweep, and MoReLab: (**a**) A frame of input video; (**b**) MeshLab visualization of a cylinder created using Metashape; (**c**) Paint strokes snapped to cylinder outline in 3-Sweep; (**d**) MeshLab visualization of a cylinder modeled using 3-Sweep; (**e**) Modeling a cylinder using base cylinder tool in MoReLab; (**f**) MeshLab visualization of a cylinder mesh obtained from MoReLab.

**Figure 7.** An example of modeling a curved pipe: (**a**) A frame of input video; (**b**) Modeling curved pipes in Metashape; (**c**) Paint strokes snapped to curved cylinder outlines; (**d**) Estimation of curved pipes using 3-Sweep visualized in MeshLab; (**e**) Bézier curve drawn on a frame; (**f**) Bézier curve drawn on another frame; (**g**) Curves on multiple frames are optimized to obtain the final Bézier curve, shown in red; (**h**) A cylinder around the curve is created; (**i**) A copy of the first cylinder is placed on the second pipe; (**j**) Estimated curved cylinders are visualized in MeshLab.

**Figure 8.** Modeling cuboids and a curved pipe with tested software programs: (**a**) Metashape reconstruction output visualized in MeshLab; (**b**) A different view of the Metashape reconstruction visualized in MeshLab; (**c**) Paint strokes snapped to desired object outlines; (**d**) 3-Sweep output visualized in MeshLab; (**e**) Estimation of desired objects in MoReLab; (**f**) Estimated objects are visualized in MeshLab.

**Figure 10.** Measurements taken in MoReLab. The distance of 22.454 cm between features 31 and 32 is the measurement provided for calibration. The other distances are calculated according to this reference distance.
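An SfM reconstruction is known only up to scale; the single reference distance in Figure 10 fixes that scale for all other measurements. A minimal sketch of this one-measurement calibration (hypothetical function names and coordinates, assuming MoReLab simply rescales Euclidean distances):

```python
import numpy as np

def calibration_scale(p_a, p_b, known_distance_cm):
    """Scale factor from reconstruction units to centimeters, fixed by a
    single known distance between two reconstructed 3D feature points."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    return known_distance_cm / np.linalg.norm(p_a - p_b)

def measure_cm(p_a, p_b, scale):
    """Metric distance between two reconstructed 3D points."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    return scale * np.linalg.norm(p_a - p_b)

# Hypothetical reconstructed points; the pair (f31, f32) is the reference
# pair known to be 22.454 cm apart, as in Figure 10.
f31, f32, f33 = [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0]
s = calibration_scale(f31, f32, 22.454)
```

Every subsequent measurement in the scene then reuses the same scale `s`, which is why a single bad reference distance propagates to all reported values.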

**Figure 11.** Measurements computed in MoReLab. The distance of 7.630 cm between features 28 and 29 is the measurement provided for calibration, and other distances are calculated.

**Figure 12.** Measurements computed in MoReLab. The distance of 50 cm between features 25 and 31 is the measurement provided for calibration, and other distances are calculated.

**Figure 13.** Measurements computed in MoReLab. The distance of 55.8 cm between features 39 and 40 is the measurement provided for calibration, and other distances are calculated.

**Figure 14.** Measurements computed in MoReLab. The distances of 22.454, 14.046, and 12.395 cm are provided for calibration, and other distances are calculated.
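How the three reference distances are combined into one scale factor is not detailed here; a plausible sketch is a least-squares fit of a single scale to all three reference pairs (an assumption on our part, not MoReLab's documented method):

```python
import numpy as np

def calibration_scale_lsq(recon_dists, known_dists_cm):
    """Least-squares fit of a single scale factor s minimizing
    sum_i (s * d_i - D_i)^2, where d_i are reconstructed distances and
    D_i the corresponding known distances. Closed form: s = (d.D)/(d.d)."""
    d = np.asarray(recon_dists, dtype=float)
    k = np.asarray(known_dists_cm, dtype=float)
    return float(d @ k / (d @ d))
```

Using several reference distances averages out the error of any single reconstructed pair, which is consistent with the lower relative errors reported for three-measurement calibration.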

**Figure 15.** Measurements computed in MoReLab. The distances of 7.63, 3.355, and 3.216 cm are provided for calibration, and other distances are calculated.

**Figure 16.** Measurements computed in MoReLab. The distances of 50, 35, and 7 cm are provided for calibration, and other distances are calculated.

**Figure 17.** Measurements computed in MoReLab. The distances of 55.8, 24, and 17.5 cm are provided for calibration, and other distances are calculated.

| | Automatic Feature Matching | Bundle Adjustment | Rectangle/Cylinder | Curved Cylinder | Measurements |
|---|---|---|---|---|---|
| Metashape | ✓ | ✓ | ✗ | ✗ | ✓ |
| 3-Sweep | ✗ | ✗ | ✓ | ✗ | ✗ |
| MoReLab | ✗ | ✓ | ✓ | ✓ | ✓ |

**Table 2.** Results of comparing MoReLab against Metashape and 3-Sweep in terms of relative error in measurements on the first video (see Figure 10).

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 14.046 | 13.472 | 4.087 |
| | 12.395 | 9.809 | 20.857 |
| | 4.115 | 2.664 | 35.201 |
| | 2.616 | 6.644 | 23.136 |
| | 2.057 | 1.852 | 9.889 |
| **Average Relative Error** | | | 18.634 |
| 3-Sweep | 14.046 | 13.858 | 1.447 |
| | 12.395 | 12.669 | 2.213 |
| | 4.115 | 4.475 | 8.765 |
| | 2.616 | 3.731 | 42.621 |
| | 2.057 | 1.338 | 34.938 |
| **Average Relative Error** | | | 17.997 |
| MoReLab | 14.046 | 11.564 | 17.672 |
| | 12.395 | 11.761 | 5.117 |
| | 4.115 | 4.147 | 0.783 |
| | 2.616 | 2.584 | 1.231 |
| | 2.057 | 2.015 | 2.061 |
| **Average Relative Error** | | | 5.373 |
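The relative errors reported in these tables follow the standard definition, |measured − ground truth| / ground truth, expressed as a percentage; for example, the first Metashape row of Table 2 gives |13.472 − 14.046| / 14.046 ≈ 4.087%. A one-line helper (our own, for illustration):

```python
def relative_error_pct(ground_truth, measured):
    """Relative measurement error as a percentage of the ground truth."""
    return abs(measured - ground_truth) / ground_truth * 100.0

# Example: first Metashape measurement in Table 2.
print(round(relative_error_pct(14.046, 13.472), 3))  # prints 4.087
```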

**Table 3.** Results of comparing MoReLab against Metashape and 3-Sweep in terms of relative error in measurements on the second video (see Figure 11).

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 3.355 | 4.161 | 24.011 |
| | 3.216 | 3.109 | 3.316 |
| | 2.365 | 2.532 | 7.073 |
| | 2.251 | 2.626 | 16.688 |
| | 1.923 | 2.045 | 6.344 |
| **Average Relative Error** | | | 11.486 |
| 3-Sweep | 3.355 | 2.388 | 28.833 |
| | 3.216 | 2.868 | 10.817 |
| | 2.365 | 1.954 | 17.374 |
| | 2.251 | 1.905 | 15.359 |
| | 1.923 | 1.264 | 34.249 |
| **Average Relative Error** | | | 21.326 |
| MoReLab | 3.355 | 4.083 | 21.687 |
| | 3.216 | 3.652 | 13.570 |
| | 2.365 | 2.594 | 9.695 |
| | 2.251 | 2.462 | 9.401 |
| | 1.923 | 1.926 | 0.15 |
| **Average Relative Error** | | | 10.902 |

**Table 4.** Results of comparing MoReLab against Metashape and 3-Sweep in terms of relative error in measurements on the third video (see Figure 12).

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 35 | 39.837 | 13.82 |
| | 7 | 5.532 | 20.971 |
| | 6.9 | 7.254 | 5.13 |
| | 6.8 | 6.523 | 4.074 |
| | 6.7 | 6.396 | 4.537 |
| **Average Relative Error** | | | 9.706 |
| 3-Sweep | 35 | 36.913 | 5.466 |
| | 7 | 7.944 | 13.486 |
| | 6.9 | 7.251 | 5.087 |
| | 6.8 | 6.276 | 7.706 |
| | 6.7 | 7.532 | 12.418 |
| **Average Relative Error** | | | 8.833 |
| MoReLab | 35 | 38.796 | 10.846 |
| | 7 | 7.817 | 11.671 |
| | 6.9 | 7.820 | 13.333 |
| | 6.8 | 6.858 | 0.853 |
| | 6.7 | 6.713 | 0.194 |
| **Average Relative Error** | | | 4.546 |

**Table 5.** Results of comparing MoReLab against Metashape and 3-Sweep in terms of relative error in measurements on the fourth video (see Figure 13).

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 24 | 24.45 | 1.873 |
| | 17.5 | 15.959 | 8.843 |
| | 5 | 3.558 | 28.852 |
| | 4.2 | 3.528 | 15.974 |
| | 3.5 | 4.016 | 14.741 |
| **Average Relative Error** | | | 14.057 |
| 3-Sweep | 24 | 21.618 | 9.927 |
| | 17.5 | 12.287 | 29.817 |
| | 5 | 3.685 | 26.289 |
| | 4.2 | 5.228 | 24.461 |
| | 3.5 | 3.815 | 9.221 |
| **Average Relative Error** | | | 19.943 |
| MoReLab | 24 | 21.51 | 10.375 |
| | 17.5 | 16.739 | 4.349 |
| | 5 | 4.592 | 8.16 |
| | 4.2 | 3.575 | 14.881 |
| | 3.5 | 3.621 | 3.457 |
| **Average Relative Error** | | | 8.244 |

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 4.115 | 2.918 | 29.085 |
| | 2.616 | 2.213 | 15.412 |
| | 2.057 | 1.864 | 9.4 |
| **Average Relative Error** | | | 17.966 |
| 3-Sweep | 4.115 | 4.549 | 10.552 |
| | 2.616 | 3.077 | 17.613 |
| | 2.057 | 1.632 | 20.677 |
| **Average Relative Error** | | | 16.281 |
| MoReLab | 4.115 | 4.497 | 9.288 |
| | 2.616 | 2.678 | 2.362 |
| | 2.057 | 2.073 | 0.758 |
| **Average Relative Error** | | | 4.136 |

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 2.25 | 2.535 | 12.645 |
| | 1.923 | 2.07 | 7.644 |
| | 2.365 | 2.512 | 6.227 |
| **Average Relative Error** | | | 8.839 |
| 3-Sweep | 2.25 | 2.375 | 5.535 |
| | 1.923 | 1.554 | 19.189 |
| | 2.365 | 2.202 | 6.878 |
| **Average Relative Error** | | | 10.534 |
| MoReLab | 2.25 | 2.272 | 0.958 |
| | 1.923 | 1.753 | 8.84 |
| | 2.365 | 2.431 | 2.802 |
| **Average Relative Error** | | | 4.20 |

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 6.9 | 5.649 | 18.13 |
| | 6.8 | 6.482 | 4.676 |
| | 6.7 | 6.447 | 3.776 |
| **Average Relative Error** | | | 8.861 |
| 3-Sweep | 6.9 | 6.962 | 0.899 |
| | 6.8 | 7.940 | 16.765 |
| | 6.7 | 6.332 | 5.493 |
| **Average Relative Error** | | | 7.719 |
| MoReLab | 6.9 | 7.41 | 7.391 |
| | 6.8 | 6.553 | 3.632 |
| | 6.7 | 6.776 | 1.134 |
| **Average Relative Error** | | | 4.052 |

| Method | Ground Truth (cm) | Measured Distance (cm) | Relative Error (%) |
|---|---|---|---|
| Metashape | 5 | 3.965 | 20.7 |
| | 4.2 | 3.164 | 24.667 |
| | 3.5 | 3.894 | 11.257 |
| **Average Relative Error** | | | 18.875 |
| 3-Sweep | 5 | 3.787 | 24.26 |
| | 4.2 | 4.991 | 18.833 |
| | 3.5 | 3.767 | 7.629 |
| **Average Relative Error** | | | 16.907 |
| MoReLab | 5 | 4.585 | 8.3 |
| | 4.2 | 4.022 | 4.238 |
| | 3.5 | 3.547 | 1.343 |
| **Average Relative Error** | | | 4.627 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Siddique, A.; Banterle, F.; Corsini, M.; Cignoni, P.; Sommerville, D.; Joffe, C.
MoReLab: A Software for User-Assisted 3D Reconstruction. *Sensors* **2023**, *23*, 6456.
https://doi.org/10.3390/s23146456
