Article

The Automatic Algorithm of Optimizing the Position of Structured Light Sensors

School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(5), 1719; https://doi.org/10.3390/app14051719
Submission received: 24 January 2024 / Revised: 13 February 2024 / Accepted: 18 February 2024 / Published: 20 February 2024

Abstract
Optical 3D detection technology is widely used in industrial inspection, agricultural production, and other fields owing to its non-contact operation, high efficiency, and high precision. However, specular reflection degrades model coverage and measurement accuracy. To address this issue, an optimization algorithm for calculating the number and poses of sensors is proposed. First, the specular reflection problem is recast as a multi-sensor position search problem. Then, an optimization algorithm finds the optimal number and poses of sensors that avoid specular reflection. Experiments show that the resulting sensor combinations cover the area to be measured with the fewest sensors while avoiding the influence of specular reflection.

1. Introduction

With the rapid development of modern industries such as shipbuilding, automobiles, aerospace, and molds, the requirements for product design and manufacturing keep rising, and the inspection requirements for product machining have become more stringent. Traditional 3D detection equipment, such as coordinate measuring machines, offers limited measurement efficiency and easily wears the surface of the workpiece. In the past few decades, remarkable progress has been made in non-contact 3D morphology reconstruction techniques. Among them, structured light measurement has been widely used in high-precision applications such as product machining because it is non-contact, covers a large range, and offers high efficiency and high precision. This technology not only improves measurement efficiency but also reduces interference with the surface of the workpiece, meeting the growing demand of modern industry for product quality and precision. However, specular reflection [1] seriously reduces the accuracy of 3D measurement. Dealing with the specular reflection problem is therefore a crucial consideration in structured light 3D measurement, and a series of effective methods is required.
There are six main categories of technologies to address specular reflections: polarization technology, structured light intensity adjustment, fully automatic exposure, reflective component separation technology, multi-angle shooting technology, and sensor setting optimization algorithms.
To address highlights in 3D measurements, some researchers [2,3,4,5,6] have used polarization techniques during the measurement process. Zhu et al. [2] proposed a polarization-based method to remove highlights from high-reflectivity surfaces: images are captured at different polarization angles, and multiple polarized images are synthesized to reduce highlight brightness, while a normalized weighting algorithm recovers the highlights that image synthesis alone cannot. Zhu et al. [3] proposed a polarization-based method for three-dimensional measurement built on the light intensity response function of the camera, which was established under the polarization system. The method avoids the complex polarization bidirectional reflectance distribution function model and directly quantifies the required angle between the transmission axes of two polarization filters; it is then combined with an image fusion algorithm to generate an optimal fringe map, and the image retains a high signal-to-noise ratio and contrast after the polarization filter is added. Kong et al. [4] designed a measurement system that acquires images at different polarization angles and uses the highlighted areas as training samples for a BP neural network whose initial weights follow Gaussian distributions; this effectively improves stereo matching accuracy and recovers the information in the highlighted regions of specular vision measurements. Liang et al. [5] proposed a polarized structured light method for reconstructing high-reflectivity surfaces. Their system combines a four-channel polarization camera with a digital light processing (DLP) projector equipped with a polarizer lens and simultaneously acquires four sets of fringe images with different luminance levels; a binary time-multiplexed structured light method then produces four different point clouds. The method achieves excellent reconstruction results on highly reflective surfaces. Huang et al. [6] devised a polarization structured light 3D sensor for these problems, in which high-contrast-grating (HCG) vertical-cavity surface-emitting lasers (VCSELs) exploit the polarization property.
A few researchers [7,8,9,10] dealt with the highlight problem by adjusting the light intensity of the projection pattern. Lin et al. [7] proposed a method that automatically determines the optimal number of light intensities and the corresponding intensity values for fringe projection based on the surface reflectivity of the object under test. The original fringe images captured under different light intensities are synthesized pixel by pixel into a composite fringe image, which can then be used for phase recovery and conversion to 3D coordinates. Cao et al. [8] used a transparent screen as an optical mask for the camera to indirectly adjust the intensity of the projected pattern: by placing the screen in front of the camera and adjusting the luminance of the corresponding screen pixels, the brightness of each camera pixel can be precisely controlled. Fu et al. [9] proposed a hardware setup and a region-adaptive structured light algorithm. Based on accurate optical modeling of the measurement scene, they derived the principles for generating the adaptive optimal projection brightness in detail and accelerated the process with a neural network. They also creatively combined chain codes with the M-estimator sample consensus method to find the homography from the saturated region of the camera plane to the corresponding region of the projector plane, generating fringe images with adaptive brightness, and chose the more robust line-shifting code and binarization method for subpixel 3D reconstruction. Zhou et al. [10] adopted a multi-intensity projection method that reduces the input projection intensity step by step and reconstructs the remaining pixels around the specular angle; the final result is obtained by stitching the point clouds from each projection intensity. Experiments verified that the method improves both the integrity of reconstructed point clouds and measurement efficiency.
Rao et al. [11] proposed a fringe projection profilometry method based on fully automated multiple exposures, which requires no human intervention and greatly simplifies the whole reconstruction process. It is mathematically proven that once the modulation of a pixel exceeds a threshold, the phase quality of that pixel can be considered satisfactory; this threshold guides the calculation of the required exposure time. The software then automatically adjusts the camera's exposure time and captures the desired fringe images, from which a final high-dynamic-range reconstruction is easily obtained. Wu et al. [12] proposed an exposure fusion (EF)-based structured light method to accurately reconstruct 3D models of objects with special reflective surfaces. A robust binary Gray code is adopted as the structured light pattern, and a group of images with different exposure times is fused into a single high-dynamic-range image. With the EF technique, captured images with many overexposed and underexposed regions become well exposed; the operational procedure remains simple, and precise 3D reconstruction of reflective surfaces can be realized. Song et al. [13] proposed a novel structured light approach for the 3D reconstruction of specular surfaces. A binary shifting stripe is adopted as the structured light pattern instead of the conventional sinusoidal pattern. Within the framework of conventional high-dynamic-range imaging, an efficient means of estimating the camera response function is first introduced; the dynamic range of the generated radiance map is then compressed in the gradient domain by an attenuation function. Because projecting different structured light patterns changes the lighting conditions, the structured light image at the middle exposure level is selected as the reference image and used for slight adjustment of the primary fused image. Finally, the regenerated, well-exposed structured light images are used for 3D reconstruction of the specular surface. Experiments on stainless steel stamping parts with strong reflectivity showed that specular targets of various shapes can be precisely reconstructed by the proposed method.
Sun et al. [14] designed a new algorithm based on reflective component separation (RCS) and priority region filling theory. The specular pixels in the image are first found by comparing pixel parameters, and the reflection components are then separated and processed. However, for objects such as ceramics and metals with strong specular highlights, RCS changes the color information of the highlight pixels because the specular reflection component is large; in this case, priority region filling theory is used to recover the color information. The basic idea of multi-angle measurement is to collect images of the same object from different angles so that the highlight areas of different images do not overlap, allowing data from the same scene to be recovered. Many subsequent data processing approaches exist, using binocular vision [15,16,17,18], monocular structured light, or the parallax method directly to recover depth information.
Qian [19] proposed a computational method to determine the optimal sensor setup, taking sensor/part interactions into account, in order to reduce the dynamic range of the signal and increase the model coverage of structured light and similar optical detection systems. First, the signal dynamic range problem is transformed into a distance problem in a spherical map. Then, a new algorithm on the spherical map searches for a near-optimal sensor orientation, from which the optimal solution with the lowest possible dynamic range is obtained. However, Qian [19] only optimizes a single sensor group, not multi-sensor systems.
Polarization-based measurement can effectively reduce specular reflection, but its hardware requirements are relatively high. Structured light intensity adjustment reduces the probability of specular reflection at the light source; it also has high hardware requirements and additionally requires calculating intensity-related coefficients of the light source. Fully automated exposure addresses specular reflection from the shooting angle, but it may leave parts of the surface too dark and requires large-scale stitching. Reflective component separation removes specular reflection by separating the reflection information, but information is lost when the specular reflection intensity is high. Multi-angle shooting captures the object from multiple angles, but it cannot automatically optimize the number and positions of the devices. Existing sensor setup optimization addresses specular reflection for a single device on a flat surface, but it does not optimize multiple devices over the whole surface of the measured object.
To solve the specular reflection problem described above, an optimization method is proposed to calculate sensor positions that avoid specular reflection for an arbitrary object. Specifically, it solves the following problem: when using multi-angle methods in the presence of specular reflection areas, find the optimal number of sensors and their specific positions.
Section 2 introduces the optimization method for the number and position of multiple sensors. Section 3 presents the simulation and validation experiments. Section 4 concludes the paper.

2. Optimization Method for the Number and Position of Multiple Sensors

Objects with high reflection coefficients, such as metals, are common targets of three-dimensional measurement. In actual measurements, however, several specular reflection areas often appear, causing a loss of measurement information and degrading the measurement; such areas are shown in Figure 1. Multi-angle measurement is one method of solving the specular reflection problem. The principle of a multi-angle sensor setup is shown in Figure 2, taking two sensor groups as an example. The specular reflection areas of the two groups differ, and their non-specular areas are complementary, so fusing the non-specular areas of the two groups solves the specular reflection problem. During this process, the coverage and specular reflection areas of the sensors must be adjusted and optimized manually. Even for professionals, objects with diverse shapes can take a long time, and the manually chosen sensor positions and quantities are not necessarily optimal.
Although some research has focused on multi-angle sensor methods, no one has addressed this issue: when using multi-angle methods in the presence of specular reflection areas, how to optimize the number of sensors and their specific positions. To address it, this article proposes an automatic algorithm for optimizing the position of structured light sensors.
The overall idea of the optimization algorithm is shown in Figure 3. First, the 3D point cloud data, which can be obtained from the design document, are processed, and the sensor position information is initialized. Next, the coverage of each sensor group is analyzed by calculating its covered area. The specular reflection area of each sensor group is then calculated, and finally the positions and poses of the sensors are optimized.

2.1. Initialize Sensor Position Information

The method applies to measured objects for which a 3D point cloud is already available. After obtaining the 3D point cloud data, the sensor position information can be initialized. First, the projector is placed on a spherical surface centered on the point cloud object. Next, based on the position of the projector, a circle is drawn in the tangent plane, and the camera corresponding to that projector is placed on this circle. Figure 4 shows the tested object (a turbine blade) and its 3D model. Figure 5 shows the overall system schematic. In Figure 5, each black box represents a projector and each blue box a camera; together, a projector-camera pair forms a sensor group. The red dashed lines represent the projector's field of view, while the blue dashed lines represent the camera's field of view. The gray spherical net represents the candidate projector positions, and the innermost black object is the object to be measured. Several sensor pairs are initialized according to the measured object.
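As a concrete illustration of this initialization, the following Python sketch (the paper's own implementation is in Matlab and is not shown here; the function name and all parameter values are illustrative assumptions) places a projector on a sphere around the object and its camera on a circle in the tangent plane at the projector:

```python
import numpy as np

def init_sensor_pair(center, radius, theta, phi, baseline, cam_angle):
    """Return (projector_pos, camera_pos); both devices are assumed
    to be aimed at `center`.

    theta/phi  : spherical angles of the projector on the sphere
    baseline   : projector-camera distance within the tangent plane
    cam_angle  : where on the tangent-plane circle the camera sits
    """
    center = np.asarray(center, dtype=float)
    # Unit vector from the sphere center to the projector.
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    projector = center + radius * n
    # Two orthonormal vectors spanning the tangent plane at the projector.
    helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # Camera on a circle of radius `baseline` around the projector.
    camera = projector + baseline * (np.cos(cam_angle) * u + np.sin(cam_angle) * v)
    return projector, camera

# Example: one pair with the 900 mm stand-off and 170 mm baseline of Table 1.
proj, cam = init_sensor_pair([7.0, -2.0, 93.0], 900.0, np.pi / 3, 0.0, 170.0, 0.0)
```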

2.2. Covered Area Judgment

Before optimizing the sensors, the first priority is to determine the covered area of each sensor. The covered area of the projector is the region of the measured object's surface onto which the pattern is projected, while the covered area of the camera is the surface region of the measured object that the camera can capture. The covered area is judged in three steps: calculating the initial covered area, calculating the covered area of a single sensor, and calculating the covered area of a sensor group.
Calculating the initial covered area
When dealing with large-scale 3D point cloud data, the computational complexity is high, so a preliminary estimate of the covered area is needed to determine the possible coverage of the sensor. Figure 6a is a schematic diagram of specular reflection in a plane: I represents the incident light vector, N the normal vector of the surface, and R the specular reflection vector of the incident light I. Within the covered area of the sensor, the dot product of the surface normal N and the incident vector I is negative. The initial covered area is therefore determined by Equation (1): when I and N satisfy Equation (1), the incident light may reach the triangular face to which N belongs; otherwise, it cannot. The faces that satisfy Equation (1) are then examined further.
Figure 6. Schematic diagram of the optical path. (a) is a schematic diagram of the specular reflection of the light in the plane. (b) is the light propagation diagram of the sensor group.
$I \cdot N < 0$  (1)
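The following Python sketch (an illustrative re-implementation, not the authors' code; the `vertices`/`faces` mesh layout is an assumption) applies the Equation (1) pre-filter to every triangle of the mesh at once:

```python
import numpy as np

def candidate_faces(vertices, faces, sensor_pos):
    """Boolean mask of faces that may be reached by light from sensor_pos.

    vertices : (m, 3) array of point coordinates
    faces    : (n, 3) integer array of vertex indices per triangle
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)      # face normals N (unnormalized)
    centroids = (v0 + v1 + v2) / 3.0
    incident = centroids - sensor_pos          # incident vectors I, one per face
    # Equation (1): keep faces with I . N < 0, i.e., facing the sensor.
    return np.einsum('ij,ij->i', incident, normals) < 0
```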
Calculating the covered area of the sensor
After the initial calculation, a small portion of the surface is still not within the effective coverage of the sensor, so an accurate per-sensor coverage calculation is required to exclude these errors. The preliminarily covered faces are traversed, and for each one, it is determined whether any surface occludes it from the sensor. If an occluding surface exists, the face is marked as non-covered; otherwise, it is marked as validly covered. The calculation is based on the Moller-Trumbore algorithm [20], whose idea is shown in Figure 7 and whose formulas are Equations (2) and (3). In Figure 7, P0 represents the center of gravity of the triangular face to be tested; P1, P2, and P3 represent the three vertices of any other triangle within the initial coverage area; O represents the sensor; and −d denotes the ray direction vector pointing from P0 to O. The parameters t, b2, and b3 in Equation (2) and b1 in Equation (3) are the unknowns. When t, b2, b3, and b1 are all greater than 0, the ray intersects the triangular plane, so the face containing P0 is not in the effective covered area of the sensor.
Figure 7. Covered area judgment.
$O + t \cdot d = (1 - b_2 - b_3) \cdot P_1 + b_2 \cdot P_2 + b_3 \cdot P_3$  (2)

$b_1 = 1 - b_2 - b_3$  (3)
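The following Python sketch (an illustrative re-implementation under the stated conventions, not the authors' code) performs this Moller-Trumbore occlusion test: a ray is cast from the sensor O toward the face centroid P0, and a hit on another candidate triangle with all of t, b1, b2, and b3 positive and t < 1 marks P0's face as occluded:

```python
import numpy as np

def occluded(p0, o, p1, p2, p3, eps=1e-9):
    """True if triangle (p1, p2, p3) blocks the ray from sensor o to centroid p0."""
    d = p0 - o                          # ray direction, so the ray is O + t*d
    e1, e2 = p2 - p1, p3 - p1           # triangle edges from P1
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:                  # ray parallel to the triangle's plane
        return False
    inv = 1.0 / det
    s = o - p1
    b2 = inv * np.dot(s, h)             # barycentric coordinate b2 of Eq. (2)
    q = np.cross(s, e1)
    b3 = inv * np.dot(d, q)             # barycentric coordinate b3 of Eq. (2)
    t = inv * np.dot(e2, q)             # ray parameter t of Eq. (2)
    b1 = 1.0 - b2 - b3                  # Eq. (3)
    # All parameters positive => the ray pierces the triangle; t < 1 keeps
    # the hit strictly between the sensor and the face being tested.
    return (t > eps) and (t < 1.0 - eps) and (b1 > 0) and (b2 > 0) and (b3 > 0)
```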
Calculating the covered area of the sensor group
In 3D structured light measurement, each sensor group includes at least one projector and one camera. In the measurement of the sensor group, the measurement area is the intersection of the covered areas of the projector and the camera. Therefore, after the calculation of the coverage area of a single sensor is completed, calculating the coverage area of a sensor group is relatively simple and only requires determining the common coverage area of the projector and the camera.
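In code, this last step is just a set intersection over face indices; a tiny sketch (the indices are hypothetical examples):

```python
# Faces covered by the projector and by the camera (example indices).
covered_by_projector = {12, 13, 17, 21, 30}
covered_by_camera = {13, 17, 21, 42}
# The sensor group's measurement area is their common coverage.
covered_by_group = covered_by_projector & covered_by_camera   # -> {13, 17, 21}
```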

2.3. Specular Reflection Area Identification

After calculating the coverage area of a sensor group, the specular reflection areas of each group must be identified; these areas serve as one of the parameters in the sensor system optimization. The light propagation of a sensor group is shown in Figure 6b, where V is the pointing vector of the camera, I is the pointing vector of the projector, and R is the specular reflection vector of the projector's light on the surface. Specular reflection occurs on the surface of a smooth object when the angle between the reflected light and the viewing direction is small; it causes the brightness of the photographed object to exceed the camera's dynamic range, producing white areas and a loss of 3D measurement information. The specular reflection of each sensor group depends closely on the positions and orientations of the sensors, so it needs special attention in the sensor system design. Identifying a specular reflection area involves calculating the angle between the reflection vector R and the negative observation vector −V: when this angle is within a specific range, the triangular face is a specular reflection region. The reflection vector R is calculated by Equation (4).
$R = I - 2(I \cdot N)N$  (4)
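A minimal vectorized Python sketch of this test (the 10° threshold is an assumed illustrative value; the paper only states that the angle must lie within a specific range):

```python
import numpy as np

def specular_faces(I, N, V, max_angle_deg=10.0):
    """I, N, V: (n, 3) arrays of unit vectors per face
    (incident direction, surface normal, camera direction)."""
    dots = np.einsum('ij,ij->i', I, N)
    R = I - 2.0 * dots[:, None] * N               # Equation (4), per face
    cos_angle = np.einsum('ij,ij->i', R, -V)      # cosine of angle(R, -V)
    # A face is specular when the angle is below the threshold.
    return cos_angle > np.cos(np.radians(max_angle_deg))
```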

2.4. Sensor Number and Pose Optimization

The sensor optimization algorithm solves the specular reflection problem in structured light 3D measurement. It is an improvement of the differential evolution algorithm [21,22,23], and its main objective is to optimize the sensor system, i.e., to derive the number and positions of projectors and cameras. The improved algorithm iterates quickly toward the optimal number and positions of sensors: it adaptively adjusts its parameters according to the iteration count, accelerating convergence toward a sensor layout that meets expectations. The flowchart of sensor number and pose optimization is shown in Figure 8.
To reduce computational complexity, the specular reflection problem is transformed into a sensor position search problem. The initial data of the algorithm are integer-valued parameters with D rows and N columns: N denotes N sensor combinations, and each combination contains the position information of D sensors. The initial population is denoted by X. The mutation operation is performed according to Equation (5), where i, r1, r2, and r3 are combination indices and r1, r2, and r3 are mutually distinct integers. F is the mutation operator, calculated by Equation (6), where F0 is the initial mutation factor, Gm denotes the maximum number of iterations of the optimization algorithm, and G is the current iteration. Next, boundary constraints are imposed on the mutated combinations to ensure that the newly generated combinations lie in the feasible domain. After comparing the objective function values of the initial combination X and the mutated combination V, a selection operation is performed: the initial combination Xi is replaced by the corresponding mutated combination Vi whenever Vi's function value is better. The algorithm then determines whether the parameter D needs to be changed and whether the termination condition is satisfied; if it is, the optimal combination is output. The objective function value is calculated by Equation (7), where Ffein is the union of the non-covered area and the specular reflection area of the nth sensor in a combination, and Fsum is the area that no sensor in the combination can measure without specular reflection; the smaller Fsum, the better the combination. The pseudocode of the algorithm is given in Appendix A.
$V_i = X_{r1} + F \cdot (X_{r2} - X_{r3})$  (5)
$F = F_0 \cdot 2^{\lambda}, \qquad \lambda = e^{1 - \frac{G_m}{G_m + 1 - G}}$  (6)
$F_{sum} = F_{fei1} \cap F_{fei2} \cap \cdots \cap F_{fein}$  (7)
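A compact Python sketch of this loop under stated assumptions (the authors' implementation is the Matlab pseudocode of Appendix A; `bad_area` is a hypothetical stand-in for the coverage and specular computations of Sections 2.2 and 2.3, the outer loop that varies D is omitted, and all numeric parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, G_max, F0 = 3, 20, 100, 0.5      # sensors per combo, combos, iterations
NUM_POSITIONS = 500                    # assumed size of the discretized sphere

def bad_area(pos):
    """Hypothetical stand-in: faces a sensor at index `pos` leaves
    uncovered or specular (Ffei of that sensor)."""
    rng2 = np.random.default_rng(int(pos))
    return set(rng2.integers(0, 200, size=60).tolist())

def objective(combo):
    """|Fsum| of Equation (7): faces bad for every sensor in the combo."""
    f_sum = bad_area(combo[0])
    for pos in combo[1:]:
        f_sum &= bad_area(pos)
    return len(f_sum)

# Population X: N combinations, each holding D sensor position indices
# (stored row-per-combination here for convenience).
X = rng.integers(0, NUM_POSITIONS, size=(N, D))
for G in range(1, G_max + 1):
    lam = np.exp(1.0 - G_max / (G_max + 1.0 - G))   # Equation (6)
    F = F0 * 2.0 ** lam                              # F falls from 2*F0 to ~F0
    for i in range(N):
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        V = np.round(X[r1] + F * (X[r2] - X[r3])).astype(int)   # Equation (5)
        V = np.clip(V, 0, NUM_POSITIONS - 1)         # boundary constraint
        if objective(V) < objective(X[i]):           # greedy selection
            X[i] = V
best = min(X, key=objective)                         # optimal combination
```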

3. Simulation

The simulation is carried out on an aircraft turbine blade, shown in Figure 4. The following approximations were used: the projector is treated as a point light source, and the surface of the object is treated as perfectly specular. Based on the light from the projector, the characteristics of the surface of the object under test, and the pointing of the camera, the possible specular reflection areas of a sensor group are predicted. The parameters used in the simulation are listed in Table 1. The hardware processor is an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz, the operating system is Windows 10 Professional, the simulation software is Matlab R2022a, and the verification software is 3Dmax 2020.
Based on these parameters and the 3D data of the object, the specular reflection regions are predicted, and the performance of the sensor combination is evaluated through simulation. This helps optimize the sensor configuration, enabling accurate 3D structured light measurement data and providing strong support for the analysis and design of the turbine blade. The optimal sensor coverage obtained after optimization is shown in Figure 9.
Figure 9 shows the optimization results of the algorithm and the validation of the specular reflection areas. As the three images in Figure 9a1–a3 show, the optimization result is that only three pairs of sensors are needed to cover the measurement surface in a single pass. The black areas in Figure 9a1–a3 represent the covered areas of the three sensor groups, and the red areas are the specular reflection areas; because the specular areas in Figure 9a1,a3 are small, they are indicated with red circles. Comparing the red areas of the three images shows that the specular reflection areas within the coverage of the three groups differ. Comparing the black areas shows that the coverage areas of the three groups overlap. Comparing the red and black parts shows that the specular reflection area of one group lies in the non-specular coverage of another group, and the non-highlight coverage of the three groups together covers the measurement surface of the object. Therefore, taken as a whole, the three sensor groups solve the specular reflection problem. Figure 9b1–b3 shows the specular reflection validation in 3Dmax; the bright white areas are the specular reflection areas calculated by 3Dmax. Comparing Figure 9a1–a3 with Figure 9b1–b3 verifies that the specular reflection areas calculated by the optimization algorithm are accurate. Hence, the sensors optimized by the algorithm cover the measurement area while solving the specular reflection problem.
Figure 10a is a schematic diagram of the sensor positions optimized by the algorithm, and Figure 10b shows the sensor positions chosen by the expert; Figure 9c1–c3 shows the corresponding coverage for the expert's manually selected number and locations of sensors. Human operators generally tend to place sensors directly in front of, behind, or above objects, whereas the algorithm can consider many more sensor positions. From the sensor coverage in Figure 9, the optimization results of the algorithm are better than those of the manual method. Manually optimizing the number and location of sensors takes a significant amount of time, and the result is not necessarily the best; when many sensors are required, the efficiency of manual optimization is far lower than that of the algorithm. Hence, the algorithm proposed in this article improves the efficiency of optimizing the number and position of sensors.

4. Conclusions

For the specular reflection problem in three-dimensional measurement, this paper proposes a sensor optimization algorithm that optimizes the number and poses of the sensors in the measurement system. The optimization goal is to minimize the number of sensors while avoiding the impact of specular reflection, which improves the efficiency of determining sensor coverage areas and reduces subsequent measurement costs. First, the specular reflection problem is viewed as a multi-sensor position search problem; then, an optimization algorithm finds the optimal number and positions of sensors that avoid specular reflection. Experiments show that the resulting multi-sensor system improves model coverage and solves the specular reflection problem.
This article optimizes the sensors for measuring turbine blades. The optimization result is that three sensor groups can cover the entire measurement area of the blade while solving the specular reflection problem. The algorithm minimizes the number of sensors while ensuring that their fields of view cover the measurement area, and it automates the matching of multiple sensor coverage areas, highlight areas, and object measurement areas, saving time and improving efficiency.
We believe that future work should include the following aspects: (1) optimizing the structure of the algorithm and improving its operational efficiency; (2) extending the optimization to large measurement objects; and (3) developing optimization software based on the algorithm.

Author Contributions

Conceptualization, Q.X.; methodology, Q.X. and X.S.; software, Z.Z.; validation, Z.Z. and Q.X.; formal analysis, Z.Z.; investigation, Q.X.; resources, Q.X.; data curation, Z.Z. and X.Y.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z. and Q.X.; visualization, Q.X.; supervision, Q.X.; project administration, Q.X.; funding acquisition, Q.X. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Henan Provincial Science and Technology Research Project (No. 222102210073) and the National Natural Science Foundation of China (No. 61705198).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be obtained from the corresponding author upon reasonable request. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The following figure shows the pseudocode of the algorithm. It was written and run in Matlab R2020a under Windows 10.
Figure A1. The pseudo-code of the proposed algorithm.

Appendix B

Table A1 lists the positions and number of the sensors produced by the algorithm. Table A2 lists the positions and number of the sensors chosen by the expert.
Table A1. Optimized sensor results.

No. | Projector Coordinates (mm) | Camera Coordinates (mm) | Target Coordinates (mm) | Manual/Automatic
1 | −451.137, −2.0, 867.668 | −569.321, −102.238, 797.774 | 7.0, −2.0, 93.0 | Automatic
2 | 397.144, −285.456, 852.895 | 328.352, −439.339, 830.812 | 7.0, −2.0, 93.0 | Automatic
3 | 403.814, 544.167, 688.181 | 254.823, 566.627, 766.905 | 7.0, −2.0, 93.0 | Automatic
Table A2. Sensor results of an expert.

No. | Projector Coordinates (mm) | Camera Coordinates (mm) | Target Coordinates (mm) | Manual/Automatic
1 | 281.024, 533.956, 532.998 | 356.202, 453.485, 615.743 | 7.0, −2.0, 93.0 | Manual (expert)
2 | 576.455, −826.899, 237.755 | 512.058, −841.154, 30.187 | 7.0, −2.0, 93.0 | Manual (expert)
3 | −707.967, 493.307, 369.1981 | −644.673, 484.098, 306.207 | 7.0, −2.0, 93.0 | Manual (expert)

References

  1. Tang, J.; Xu, X.; Peng, Y.; Xue, L.; Xie, K. Review of Highlight Suppression Methods for Structured Light 3D Measurement. In Proceedings of the ICGSP 2022: 2022 the 6th International Conference on Graphics and Signal Processing, Chiba, Japan, 1–3 July 2022; pp. 63–70. [Google Scholar] [CrossRef]
  2. Zhu, Z.; Xiang, P.; Zhang, F. Polarization-based method of highlight removal of high-reflectivity surface. Optik 2020, 221, 165345. [Google Scholar] [CrossRef]
  3. Zhu, Z.M.; Zhu, W.T.; Zhou, F.Q.; Yang, C. Three-dimensional measurement of fringe projection based on the camera response function of the polarization system. Opt. Eng. 2021, 60, 055105. [Google Scholar] [CrossRef]
  4. Kong, L.; Sun, X.; Rahman, M.; Xu, M. A 3D measurement method for specular surfaces based on polarization image sequences and machine learning. CIRP Ann. 2020, 69, 497–500. [Google Scholar] [CrossRef]
  5. Liang, J.; Ye, Y.; Gu, F.; Zhang, J.; Zhao, J.; Song, Z. A Polarized Structured Light Method for the 3D Measurement of High-Reflective Surfaces. Photonics 2023, 10, 695. [Google Scholar] [CrossRef]
  6. Huang, X.L.; Wu, C.Y.; Xu, X.L.; Wang, B.S.; Zhang, S.; Shen, C.H.; Yu, C.N.; Wang, J.X.; Chi, N.; Yu, S.H.; et al. Polarization structured light 3D depth image sensor for scenes with reflective surfaces. Nat. Commun. 2023, 14, 6855. [Google Scholar] [CrossRef] [PubMed]
  7. Lin, H.; Han, Z. Automatic optimal projected light intensity control for digital fringe projection technique. Opt. Commun. 2021, 484, 126574. [Google Scholar] [CrossRef]
  8. Cao, J.; Li, C.; Li, C.; Zhang, X.; Tu, D. High-reflectivity surface measurement in structured-light technique by using a transparent screen. Measurement 2022, 196, 111273. [Google Scholar] [CrossRef]
  9. Fu, Y.; Fan, J.; Jing, F.; Tan, M. High Dynamic Range Structured Light 3-D Measurement Based on Region Adaptive Fringe Brightness. IEEE Trans. Ind. Electron. 2023. [Google Scholar] [CrossRef]
  10. Zhou, P.; Wang, H.; Wang, Y.; Yao, C.; Lin, B. A 3D shape measurement method for high-reflective surface based on dual-view multi-intensity projection. Meas. Sci. Technol. 2023, 34, 075021. [Google Scholar] [CrossRef]
  11. Rao, L.; Da, F. High dynamic range 3D shape determination based on automatic exposure selection. J. Vis. Commun. Image Represent. 2018, 50, 217–226. [Google Scholar] [CrossRef]
  12. Wu, K.; Tan, J.; Xia, H.L.; Liu, C.B. An Exposure Fusion-Based Structured Light Approach for the 3D Measurement of a Specular Surface. IEEE Sens. J. 2021, 21, 6314–6324. [Google Scholar] [CrossRef]
  13. Song, Z.; Jiang, H.; Lin, H.; Tang, S. A high dynamic range structured light means for the 3D measurement of specular surface. Opt. Lasers Eng. 2017, 95, 8–16. [Google Scholar] [CrossRef]
  14. Sun, X.; Liu, Y.; Yu, X.; Wu, H.; Zhang, N. Three-Dimensional Measurement for Specular Reflection Surface Based on Reflection Component Separation and Priority Region Filling Theory. Sensors 2017, 17, 215. [Google Scholar] [CrossRef] [PubMed]
  15. Li, B.Z.; Xu, Z.J.; Gao, F.; Cao, Y.L.; Dong, Q.C. 3D Reconstruction of High Reflective Welding Surface Based on Binocular Structured Light Stereo Vision. Machines 2022, 10, 159. [Google Scholar] [CrossRef]
  16. Liu, G.-H.; Liu, X.-Y.; Feng, Q.-Y. 3D shape measurement of objects with high dynamic range of surface reflectivity. Appl. Opt. 2011, 50, 4557–4565. [Google Scholar] [CrossRef] [PubMed]
  17. Sun, H.; Bi, Y. Binocular multi-line structured light matching. In Proceedings of the 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Chengdu, China, 17–19 October 2020; pp. 345–349. [Google Scholar]
  18. Zhang, B.; Yu, J.; Jiao, X.; Lei, Z. Line Structured Light Binocular Fusion Filling and Reconstruction Technology. Laser Optoelectron. Prog. 2023, 60, 1611001. [Google Scholar] [CrossRef]
  19. Qian, X. Computational approach for optimal sensor setup. Opt. Eng. 2003, 42, 1238. [Google Scholar] [CrossRef]
  20. Shumskiy, V. GPU Ray Tracing: Comparative Study on Ray-Triangle Intersection Algorithms; Springer: Berlin/Heidelberg, Germany, 2013; pp. 78–91. [Google Scholar] [CrossRef]
  21. Li, G.-Y.; Liu, M.-G. The Summary of Differential Evolution Algorithm and its Improvements. In Proceedings of the 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), Chengdu, China, 20–22 August 2010; pp. 153–156. [Google Scholar] [CrossRef]
  22. Jena, B.; Naik, M.K.; Wunnava, A.; Panda, R. A Differential Squirrel Search Algorithm; Springer: Singapore, 2021; pp. 143–152. [Google Scholar] [CrossRef]
  23. Wang, L.; Zhang, L.; Wang, J.; Yang, Y.; Li, Z.; Zhang, J. Research on Optimization of Cigarette Delivery Rout Based on Differential Evolution Algorithm. In Proceedings of the ICAIIS 2021: 2021 2nd International Conference on Artificial Intelligence and Information Systems, Chongqing, China, 28–30 May 2021; pp. 270–275. [Google Scholar] [CrossRef]
Figure 1. Specular reflection during measurement (the strong white light area on an object during specular reflection).
Figure 2. Schematic diagram of a multi-angle sensor. The red dashed lines of sensors 1 and 3 indicate their respective fields of view; the blue dashed lines of sensors 2 and 4 indicate their respective fields of view.
Figure 3. Flow chart of sensor optimization.
Figure 4. Turbine blade. Part (a) is a real photo of the blade. Part (b) is a three-dimensional model constructed from the point cloud of the blade.
Figure 5. Multi-sensor system. The red dashed line represents the projector's field of view, while the blue dashed line represents the camera's field of view.
Figure 8. Sensor number and pose optimization.
Figure 9. Optimization results, validation, and contrast. (a1–a3) Optimization results of the algorithm: the red area is the specular reflection area, the blue area is the normal area, and the red circle indicates the presence of a small specular reflection area. (b1–b3) Verification of specular reflection by 3Dmax. (c1–c3) The number and locations of sensors manually selected by an expert. Bright white in (b1–b3,c1–c3) represents the specular reflection area.
Figure 10. Location of sensors. (a) Sensor locations from the algorithm. (b) Sensor locations used by the expert. The black and yellow objects are the projectors, while the blue object is the camera. The solid blue line connecting a sensor and the object indicates the direction of the sensor's field of view.
Table 1. Experimental device parameters.

Blade Size (mm) | Blade Material | Distance (Projector and Object) (mm) | Distance (Projector and Camera) (mm)
53 × 58 × 205 | Titanium | 900 | 170