3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
Abstract: In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method; then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process is proposed and applied to the recovered point cloud, exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes. The proposed reconstruction framework is tested on both simulated and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and completely cover the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement in the structure and visualization of the recovered points.
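The refinement step described above relies on the prior that sub-components of artificial space objects (solar panels, antenna dishes, main bodies) are close to basic geometric shapes. As a minimal sketch of this idea (not the authors' actual algorithm), the following Python/NumPy snippet fits a planar primitive to a noisy point cloud with a RANSAC-style consensus loop and discards points far from the fitted shape; the function names and the fixed inlier threshold are illustrative assumptions.

```python
import numpy as np

def fit_plane_lsq(points):
    # Least-squares plane through a point set: returns (centroid, unit normal).
    # The normal is the right singular vector for the smallest singular value.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def filter_by_plane_prior(points, threshold=0.05, iterations=100, seed=0):
    # RANSAC-style consensus: repeatedly fit a plane to a random 3-point
    # sample and keep the largest set of points lying within `threshold`
    # of the plane. Points outside that set are treated as outliers/noise.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        c, n = fit_plane_lsq(sample)
        dist = np.abs((points - c) @ n)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]
```

In a full pipeline one would fit several primitive types (planes, cylinders, spheres) per segmented sub-component and keep the best-scoring model; this sketch shows only the plane case.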
Cite This Article
Zhang, H.; Wei, Q.; Jiang, Z. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor. Sensors 2017, 17, 1689.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.