Article

An Efficient Neural Network for Shape from Focus with Weight Passing Method

1 School of Mechanical Engineering, Gwangju Institute of Science and Technology, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea
2 School of Computer Science and Engineering, Korea University of Technology and Education, 1600 Chungjeolno, Byeogchunmyun, Cheonan 31253, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(9), 1648; https://doi.org/10.3390/app8091648
Received: 23 August 2018 / Revised: 10 September 2018 / Accepted: 12 September 2018 / Published: 13 September 2018
(This article belongs to the Special Issue Holography, 3D Imaging and 3D Display)
In this paper, we propose an efficient neural network model for shape from focus together with a weight passing (WP) method. The network is simplified by reducing the input data dimensions and eliminating the redundancies present in the conventional model, which lowers the computational complexity without compromising accuracy. To increase the convergence rate and efficiency, the WP method is introduced: it selects the initial weights for the first pixel randomly from the neighborhood of the reference depth, and it initializes each subsequent pixel by passing on the updated weights of the current pixel. The WP method not only speeds up convergence but is also effective in avoiding the local minimum problem. Moreover, it can be applied to neural networks with diverse configurations to obtain better depth maps. The proposed system is evaluated on image sequences of synthetic and real objects. Experimental results demonstrate that the proposed model is considerably more efficient and improves the convergence rate significantly, while its accuracy is comparable to that of existing systems.
Keywords: shape from focus; neural network; weight passing
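As a rough illustration of the weight passing idea described in the abstract above, the following sketch shows how the updated weights of one pixel can seed the optimization of the next pixel, with the very first pixel initialized near a reference depth. It is a minimal, hypothetical Python/NumPy example, not the authors' implementation; the tiny per-pixel regression model and the helper names (init_weights_near, train_pixel) are assumptions made here purely for illustration.

import numpy as np

def init_weights_near(reference_depth, scale=0.1, sizes=((8, 4), (4, 1)), rng=None):
    # Hypothetical helper: draw small random weights in a neighborhood set by a reference depth.
    rng = np.random.default_rng() if rng is None else rng
    return [reference_depth * scale + rng.normal(0.0, scale, s) for s in sizes]

def train_pixel(weights, focus_vector, target_depth, lr=0.01, iters=20):
    # Toy gradient-descent loop for one pixel's two-layer model;
    # returns the updated weights and the final depth estimate.
    w1, w2 = weights
    x = focus_vector
    y = 0.0
    for _ in range(iters):
        h = np.tanh(x @ w1)        # hidden-layer activations
        y = (h @ w2).item()        # predicted depth for this pixel
        err = y - target_depth     # gradient factor of the squared error
        grad_w2 = err * h[:, None]
        grad_w1 = err * np.outer(x, (1.0 - h**2) * w2.ravel())
        w1 -= lr * grad_w1
        w2 -= lr * grad_w2
    return [w1, w2], y

# Weight passing over a scan of pixels: the first pixel starts from weights chosen
# near a reference depth; every subsequent pixel reuses the previous pixel's weights.
rng = np.random.default_rng(0)
focus_vectors = rng.random((100, 8))   # stand-in focus-measure vectors, one per pixel
target_depths = rng.random(100)        # stand-in training targets
weights = init_weights_near(reference_depth=0.5, rng=rng)
depth_map = []
for fv, td in zip(focus_vectors, target_depths):
    weights, depth = train_pixel(weights, fv, td)
    depth_map.append(depth)

In the paper's setting, the per-pixel focus-measure vectors would come from the focus volume computed over the image sequence; the random arrays above merely stand in for that data.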

MDPI and ACS Style

Kim, H.-J.; Mahmood, M.T.; Choi, T.-S. An Efficient Neural Network for Shape from Focus with Weight Passing Method. Appl. Sci. 2018, 8, 1648. https://doi.org/10.3390/app8091648

AMA Style

Kim H-J, Mahmood MT, Choi T-S. An Efficient Neural Network for Shape from Focus with Weight Passing Method. Applied Sciences. 2018; 8(9):1648. https://doi.org/10.3390/app8091648

Chicago/Turabian Style

Kim, Hyo-Jong, Muhammad T. Mahmood, and Tae-Sun Choi. 2018. "An Efficient Neural Network for Shape from Focus with Weight Passing Method" Applied Sciences 8, no. 9: 1648. https://doi.org/10.3390/app8091648

