Search Results (5)

Search Parameters:
Keywords = iterative closet point

23 pages, 26689 KiB  
Article
Grey Wolf Optimizer with Behavior Considerations and Dimensional Learning in Three-Dimensional Tooth Model Reconstruction
by Ritipong Wongkhuenkaew, Sansanee Auephanwiriyakul, Marasri Chaiworawitkul, Nipon Theera-Umpon and Uklid Yeesarapat
Bioengineering 2024, 11(3), 254; https://doi.org/10.3390/bioengineering11030254 - 5 Mar 2024
Cited by 2 | Viewed by 2131
Abstract
Three-dimensional registration with the affine transform is one of the most important steps in 3D reconstruction. In this paper, the modified grey wolf optimizer with behavior considerations and dimensional learning (BCDL-GWO) algorithm is introduced as a registration method. To refine the 3D registration result, we incorporate the iterative closest point (ICP) algorithm. The BCDL-GWO with ICP method is applied to scanned commercial orthodontic and regular tooth models. Since the registration combines multiple views of optical images, a hierarchical structure is implemented. For both models, the proposed algorithm produces high-quality 3D visualization images with the smallest mean squared errors, about 7.2186 and 7.3999 μm², respectively. Our results are compared with statistical randomization-based particle swarm optimization (SR-PSO): the BCDL-GWO with ICP results are better than those from SR-PSO, while the computational complexities of the two methods are similar.
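As a rough, illustrative sketch of the ICP refinement step this abstract mentions (not the authors' BCDL-GWO implementation), a minimal point-to-point ICP could look like the following. It assumes NumPy and SciPy and uses a plain rigid (SVD/Kabsch) update rather than the paper's affine transform; the mean squared error it tracks corresponds to the metric reported in the abstract.

```python
# Minimal point-to-point ICP sketch (illustrative only, not the BCDL-GWO pipeline).
# `source` and `target` are assumed to be (N, 3) and (M, 3) NumPy arrays of 3D points.
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iterations=50, tol=1e-8):
    """Refine an initial alignment by repeatedly matching nearest neighbours
    and solving for the best rigid transform with an SVD (Kabsch) step."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        dists, idx = tree.query(src)              # closest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(dists ** 2)                 # mean squared error of the correspondences
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```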

18 pages, 18529 KiB  
Article
Three-Dimensional Tooth Model Reconstruction Using Statistical Randomization-Based Particle Swarm Optimization
by Ritipong Wongkhuenkaew, Sansanee Auephanwiriyakul, Marasri Chaiworawitkul and Nipon Theera-Umpon
Appl. Sci. 2021, 11(5), 2363; https://doi.org/10.3390/app11052363 - 7 Mar 2021
Cited by 7 | Viewed by 2736
Abstract
Registration between images is a crucial part of 3D tooth model reconstruction. In this paper, we introduce a registration method that uses our proposed statistical randomization-based particle swarm optimization (SR-PSO) algorithm with the iterative closest point (ICP) method to find the optimal affine transform between images. Hierarchical registration is also utilized, since several consecutive images are involved in the registration. We applied the algorithm to scanned commercial regular-tooth and orthodontic-tooth models. The final 3D images provided good visualization to the human eye, with mean squared errors of 7.37 μm² and 7.41 μm² for the two models, respectively. Compared with the particle swarm optimization (PSO) algorithm with the ICP method, the results from the proposed algorithm are much better.
(This article belongs to the Special Issue Data Technology Applications in Life, Diseases, and Health)
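To illustrate the particle-swarm search over affine parameters that this abstract describes, here is a generic PSO sketch. It is not the proposed SR-PSO (its statistical randomization step is not reproduced), and the 12-parameter packing of the affine transform and the hyperparameter values are assumptions for the example.

```python
# Generic PSO over 12 affine parameters (illustrative; not the paper's SR-PSO).
import numpy as np

def pso_affine(cost, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise cost(theta), where theta packs a 3x3 affine matrix A (first 9 entries,
    row-major) and a translation t (last 3 entries); points transform as x @ A.T + t."""
    rng = np.random.default_rng(seed)
    dim = 12
    pos = rng.normal(0.0, 0.1, size=(n_particles, dim))
    pos[:, [0, 4, 8]] += 1.0                      # start A near the identity
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    g = np.argmin(pbest_cost)
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        if costs.min() < gbest_cost:
            gbest, gbest_cost = pos[np.argmin(costs)].copy(), costs.min()
    return gbest, gbest_cost
```

The cost function is assumed to be supplied by the caller, e.g. the mean squared nearest-neighbour distance between the affine-transformed source cloud and the target cloud, which is the quantity an ICP refinement would then minimise further.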

22 pages, 1372 KiB  
Article
Continuous Driver’s Gaze Zone Estimation Using RGB-D Camera
by Yafei Wang, Guoliang Yuan, Zetian Mi, Jinjia Peng, Xueyan Ding, Zheng Liang and Xianping Fu
Sensors 2019, 19(6), 1287; https://doi.org/10.3390/s19061287 - 14 Mar 2019
Cited by 35 | Viewed by 5558
Abstract
The driver gaze zone is an indicator of a driver's attention and plays an important role in monitoring the driver's activity. Due to poor initialization of the point-cloud transformation, gaze zone systems using RGB-D cameras and the ICP (iterative closest point) algorithm do not work well under long-duration head motion. In this work, a solution for a continuous driver gaze zone estimation system in real-world driving situations is proposed, combining multi-zone ICP-based head pose tracking and appearance-based gaze estimation. To initialize and update the coarse transformation for ICP, a particle filter with auxiliary sampling is employed for head state tracking, which accelerates the iterative convergence of ICP. Multiple templates for different gaze zones are applied to balance the template revision of ICP under large head movements. For the RGB information, an appearance-based gaze estimation method with two-stage neighbor selection is utilized, which treats gaze prediction as the combination of a neighbor query (in head pose and eye-image feature space) and linear regression (between eye-image feature space and gaze angle space). The experimental results show that the proposed method outperforms the baseline methods on gaze estimation and can provide stable head pose tracking for driver behavior analysis in real-world driving scenarios.
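A minimal sketch of the two-stage neighbor selection plus linear regression idea in the appearance-based gaze branch (the particle filter and ICP head tracking are not shown; the feature definitions, array shapes, neighbor counts, and ridge term are assumptions, not the authors' code):

```python
# Two-stage neighbour selection + local linear regression sketch (illustrative).
# db_pose: (N, p) head poses, db_feat: (N, d) eye-image features, db_gaze: (N, 2) gaze angles.
import numpy as np
from scipy.spatial import cKDTree

def estimate_gaze(head_pose, eye_feat, db_pose, db_feat, db_gaze,
                  k_pose=50, k_feat=10, ridge=1e-3):
    """Stage 1: pick k_pose training samples with the closest head pose.
    Stage 2: among them, pick k_feat samples with the closest eye-image feature,
    then fit a local ridge-regularised linear map from eye features to gaze angles."""
    idx_pose = cKDTree(db_pose).query(head_pose, k=k_pose)[1]      # neighbour query in pose space
    feats, gazes = db_feat[idx_pose], db_gaze[idx_pose]
    idx_feat = cKDTree(feats).query(eye_feat, k=k_feat)[1]         # neighbour query in feature space
    X, Y = feats[idx_feat], gazes[idx_feat]
    Xb = np.hstack([X, np.ones((len(X), 1))])                      # append bias column
    W = np.linalg.solve(Xb.T @ Xb + ridge * np.eye(Xb.shape[1]), Xb.T @ Y)
    return np.append(eye_feat, 1.0) @ W                            # predicted (yaw, pitch)
```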

19 pages, 12278 KiB  
Article
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm
by Yun-Ting Wang, Chao-Chung Peng, Ankit A. Ravankar and Abhijeet Ravankar
Sensors 2018, 18(4), 1294; https://doi.org/10.3390/s18041294 - 23 Apr 2018
Cited by 76 | Viewed by 9522
Abstract
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors; however, this affects the overall system cost and increases computation. In this work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting the surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) algorithm with interpolation is presented. In contrast to traditional ICP, the point cloud is first processed to extract corner and line features before point registration is applied. Points labeled as corners are then matched only with corner candidates, and points labeled as lines only with line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environmental uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out in both clean and perturbed environments. The results show that the proposed method significantly reduces computational effort while preserving localization precision.
(This article belongs to the Special Issue Selected Sensor Related Papers from ICI2017)
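To illustrate the "corners match only corners, lines match only lines" idea behind WP-ICP, the sketch below labels ordered 2D scan points by turning angle and performs one weighted alignment step. The paper's interpolation and confidence-level fusion are simplified to fixed example weights, so this is only an assumption-laden sketch, not the authors' algorithm.

```python
# Feature-labelled, weighted alignment step in the spirit of WP-ICP (illustrative).
import numpy as np
from scipy.spatial import cKDTree

def label_scan(points, angle_thresh_deg=30.0):
    """Crudely split ordered 2D scan points into corner and line points using the
    turning angle between consecutive segments."""
    v1, v2 = points[1:-1] - points[:-2], points[2:] - points[1:-1]
    cos_a = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
    angles = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    corner_mask = np.zeros(len(points), dtype=bool)
    corner_mask[1:-1] = angles > angle_thresh_deg
    return points[corner_mask], points[~corner_mask]

def weighted_step(src_corners, src_lines, ref_corners, ref_lines,
                  w_corner=0.7, w_line=0.3):
    """One weighted rigid alignment step: corners pair only with corner candidates,
    line points only with line candidates; the two correspondence sets are fused
    with fixed confidence weights (example values)."""
    blocks = []
    for src, ref, w in ((src_corners, ref_corners, w_corner),
                        (src_lines, ref_lines, w_line)):
        if len(src) and len(ref):
            idx = cKDTree(ref).query(src)[1]
            blocks.append((src, ref[idx], np.full(len(src), w)))
    P = np.vstack([b[0] for b in blocks])
    Q = np.vstack([b[1] for b in blocks])
    W = np.concatenate([b[2] for b in blocks])
    mu_p = np.average(P, axis=0, weights=W)
    mu_q = np.average(Q, axis=0, weights=W)
    H = (W[:, None] * (P - mu_p)).T @ (Q - mu_q)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_q - R @ mu_p                      # apply as: src @ R.T + t
```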

21 pages, 7452 KiB  
Article
3D Ear Normalization and Recognition Based on Local Surface Variation
by Yi Zhang, Zhichun Mu, Li Yuan, Hui Zeng and Long Chen
Appl. Sci. 2017, 7(1), 104; https://doi.org/10.3390/app7010104 - 21 Jan 2017
Cited by 23 | Viewed by 7052
Abstract
Most existing ICP (iterative closest point)-based 3D ear recognition approaches resort to coarse-to-fine ICP algorithms to match 3D ear models. In such an approach, the gallery-probe pairs are coarsely aligned based on a few local feature points and then finely matched using the original ear point cloud. However, this ignores the fact that not all points in the coarsely segmented ear data make a positive contribution to recognition: the coarsely segmented data contain a lot of redundant and noisy points that can lead to mismatches, and the fine ICP matching can easily become trapped in local minima without the constraint of local features. In this paper, an efficient and fully automatic 3D ear recognition system is proposed to address these issues. The system describes the 3D ear surface with a local feature, the local surface variation (LSV), which responds to the concave and convex areas of the surface. Instead of being used to extract discrete key points, the LSV descriptor is utilized to eliminate redundant flat non-ear data and obtain normalized and refined ear data. At the recognition stage, a one-step modified iterative closest point using local surface variation (ICP-LSV) algorithm is proposed, which provides additional local feature information to the ear recognition procedure to enhance both matching accuracy and computational efficiency. On an Intel® Xeon® W3550 3.07 GHz workstation (DELL T3500, Beijing, China), the authors were able to extract features from a probe ear in 2.32 s and match the ear with a gallery ear in 0.10 s using the method outlined in this paper. The proposed algorithm achieves a rank-one recognition rate of 100% on the Chinese Academy of Sciences' Institute of Automation 3D face database (CASIA-3D FaceV1, CASIA, Beijing, China, 2004) and 98.55% with a 2.3% equal error rate (EER) on Collection J2 of the University of Notre Dame biometrics database (UND-J2, University of Notre Dame, South Bend, IN, USA, 2003-2005).
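A minimal sketch of a local surface variation computation in the spirit this abstract describes (the exact descriptor, neighborhood size, and threshold used in the paper may differ; the values here are assumptions):

```python
# Local surface variation (LSV) sketch: small values indicate flat regions that can
# be discarded before matching; larger values indicate curved, ear-like regions.
import numpy as np
from scipy.spatial import cKDTree

def local_surface_variation(points, k=30):
    """For each 3D point, compute lambda_min / (lambda_0 + lambda_1 + lambda_2) of the
    covariance of its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    lsv = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)          # 3x3 local covariance
        eigvals = np.linalg.eigvalsh(cov)     # ascending eigenvalues
        lsv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return lsv

# Usage sketch: keep only sufficiently curved points before ICP-style matching.
# ear_points = points[local_surface_variation(points) > 0.01]   # threshold is an example
```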
