
Sensors 2011, 11(3), 3227-3248;

Gyroscope Pivot Bearing Dimension and Surface Defect Detection
Precision Opto-Mechatronics Technology, Key-Laboratory of Education Ministry, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
Author to whom correspondence should be addressed.
Received: 31 January 2011; in revised form: 1 March 2011 / Accepted: 4 March 2011 / Published: 16 March 2011


Abstract: To address the lack of systematic analysis in illumination system design processes and the absence of criteria for design methods in vision detection, a method for the design of a task-oriented illumination system is proposed. For detecting micro-defects on the high-curvature glabrous surface of a gyroscope pivot bearing, the characteristics of the surface and its reflection model are analyzed, and a complex illumination system combining coaxial and ring lights is proposed. The illumination system is then optimized based on the simulated illuminance uniformity of the target regions and on the grey-scale uniformity and articulation calculated from grey-scale images. Currently, applying the Pulse Coupled Neural Network (PCNN) method requires its structural parameters to be tested and adjusted repeatedly. This paper therefore proposes a particle swarm optimization (PSO) algorithm, in which the maximum between-cluster variance criterion is used as the fitness function with a linearly decreasing inertia factor. This algorithm adaptively sets the PCNN linking coefficient and dynamic threshold, which avoids algorithmic precocity and local oscillation. The proposed method is applied to pivot bearing defect image processing. The segmentation results of the maximum entropy method, the minimum error method and the method described in this paper are compared using buffer region matching, and the experimental results show that the proposed method is effective.
Keywords: illumination system; defect detection; pulse coupled neural network; particle swarm optimization; image segmentation

1. Introduction

The gyroscope is the key part of an inertial navigation system, and its performance has a direct impact on the precision of the whole system. The gyroscope pivot bearing, whose diameter is only a few millimeters and whose tolerance requirements are very strict, is a key part of a floating gyroscope. Its dimensions cannot be measured by contact methods after surface polishing. Defects on the high-curvature surface of the pivot bearing ball head are only a few microns in size and vary in shape. As a result, it is difficult to measure the dimensions and detect surface defects.

At present, new detection techniques, such as image-based detection, have become indispensable in modern industrial production. During mass industrial production operations, image-based detection ensures the consistency of product detection and helps implement data quality monitoring and process control, thereby increasing detection security, reliability, efficiency and precision, and reducing production costs. According to the characteristics being detected, image-based detection applications are categorized as dimension measurement, surface quality detection, structural quality detection and system operating status monitoring [1]. Among these applications, dimension measurement and surface quality detection are the most commonly used. Dimension measurement mainly involves characteristics of a target such as appearance, shape and position. It is used in many fields, such as detection of discontinuous arc roundness in machining [2], assembly clearance in the automotive industry [3], and offset and deflection of chips in electronics, such as in printed circuit board (PCB) production [4]. Surface detection mainly involves detection of defects that impact product surface quality, such as pits, scratches, cracks, air bubbles, holes, wear, roughness, texture and burrs, as in steel plate surface defect detection [5], surface roughness measurement [6], tunnel wall surface defect detection [7], welding seam defect detection [8], fabric surface defect detection [9], and wood defect detection [10].

In this paper an image-based detection technique for pivot bearing dimension measurement and surface defect detection is described. Illumination system design has a direct relationship with final imaging quality, and it is one of the keys to the success of any vision detection system. Improper illumination may give rise to many problems; for example, overexposure may hide true defects, shadows may cause false edge detections, and non-uniform illumination may make image segmentation difficult. As a result, illumination quality directly impacts image analysis results [11]. There are multiple methods for illumination system design. As an example of optical path-based analysis, Lu [12] determined the ray angles and geometric parameters of a three-color ring LED light source, which was used for detecting circuit board weld defects. As a method based on optimization techniques, Sunil [13] studied the energy function of an optimum light source position, and calculated the minimum energy function and estimated the light source position by a simulated annealing algorithm. For design based on ray directions, Nicolas [14] designed a light source shaped like a shower head, which enhances defect contrast and detects randomly oriented scratches on part surfaces. As an example of design based on dynamic illumination, Ng [15] designed a moving ring light source and judged surface defects from changes of the bright ring on a smooth surface for defect detection of bearing surfaces. In an example of design based on imaging quality analysis, Wu [16] associated image quality with measurement precision, and adjusted the light illuminance by analyzing image quality to obtain the highest measurement precision. In short, the choice and design of the light source must be studied for each combination of detection task and work environment.

Moreover, the small proportion of the target defects relative to the entire picture in a micrograph, uneven surface illumination on high-curvature surfaces and natural metal textures all make the contrast between defect regions and background regions small, and image segmentation comparatively difficult. PCNN has been widely used in many fields of image processing, such as denoising [17], segmentation [18], fusion [19] and feature extraction [20], but the elementary PCNN model framework is complex, and there are multiple undetermined parameters such as attenuation constants, amplification coefficients and linking coefficients. Most parameters are configured by manual tests, which limits PCNN image processing speed and makes it difficult to implement automatic image processing. Richard [21] used a genetic algorithm for setting optimum PCNN parameters, yet the key to genetic algorithms is the accurate setting of parameters such as the mutation and crossover operators, which, if not set properly, will destroy evolutionary stability. Particle Swarm Optimization (PSO) is an efficient search strategy [22], which features quick convergence and requires fewer parameter settings. Chao [23] used PSO to search for the best parameter value of a generalized diffusion coefficient function for anisotropic diffusion defect detection in low-contrast surface images. In this paper, the PSO algorithm is used to automatically set the key PCNN parameters, with the maximum between-cluster variance as the fitness function, which enables automatic PCNN image processing.

The paper is organized as follows: Section 2 introduces the structure of the gyroscope pivot bearing dimension measurement and surface detection system; Section 3 presents the task-oriented illumination system design method; Section 4 presents self-adaptive parameter setting obtained by integrating the PSO algorithm and PCNN; Section 5 describes experimental results and comparisons. Finally, conclusions and future developments are presented in Section 6.

2. Detection System Design

2.1. System Framework

The shape of a gyroscope pivot bearing is shown in Figure 1. The dimensions of gyroscope pivot bearings are small and any defects of the high-curvature surface are at the micron level; therefore, a magnifying vision system is required, as shown in Figure 2, where the microscope is set horizontally, and coaxial and ring light sources form a combined illumination system. The pivot bearing to be detected is installed on a combined motion platform composed of a three-dimensional motion platform and a two-dimensional rotary platform. Motion in the Y and Z directions adjusts the pivot bearing position, which ensures that the target is in the center of the visual field of the camera. The distance between the pivot bearing and the lens is adjusted by the X-direction motion platform to bring the lens into focus. The two-dimensional rotary platform, providing horizontal and vertical rotation, brings each local surface region into the visual field in turn. Consequently, all regions to be detected can be inspected.

2.2. Pivot Bearing Surface Detection Policy

As shown in Figure 3, the region to be detected is above the hemisphere of the pivot bearing. The B-B working face is detected when the rotation axis and the primary optical axis coincide. A zone on the non-working face is detected by turning the vertical rotary platform 360° after the horizontal rotary platform is turned so that the pivot bearing has a certain inclination angle. The A-A working face is detected by turning the vertical rotary platform 360° when the rotation axis and the primary optical axis are perpendicular, so that the whole top hemisphere can be detected.

2.3. Dimension Measurements

The whole pivot bearing cannot be observed in one field of view when magnified; only a partial edge is present in the field of view. Therefore, for dimension measurement it is necessary to move the pivot bearing and capture images at multiple edge positions. Edge point coordinates are obtained by image processing, and the value of the length gauge is recorded when the Z-direction motion platform is moved. The distance between two edge sections is calculated from the edge points and the recorded length gauge values. Figure 4 shows the geometric parameters to be measured. As shown in Figure 1, the axis diameter is measured by capturing images at positions 1, 2, 3 and 4; the ball head diameter is measured by capturing images at positions 5 and 6.

3. Illumination System Design

3.1. Task-Oriented Illumination Design Method

The vision detection illumination system design method in this paper can be summed up as follows:

  • Design of the illumination mode: First, the spatial environment of the system is analyzed, including the effective visual field of the lens and the distance between the lens and the illuminated surface, which preliminarily determines the basic structure of the illumination system. Second, the characteristics of the surface to be illuminated are analyzed. Different illumination modes are used depending on the shape and surface characteristics, as illustrated in Figure 5. The relatively flat and rough surface at the top left corner does not need any special illumination mode, but the geometric structure and distribution of the light sources become more complex as the surface to be detected becomes smooth and curved, moving rightward and downward in the figure.

  • Optical simulation-assisted design: Uniformity is an important characteristic of light sources; symmetrical illumination gives symmetrical gray-scale images. The illuminance distribution on the surface may be affected by the light source structure, illumination distance and ray angles. The illumination system is modeled using illumination optics theory and optical simulation software. The illuminance distribution on the surface is analyzed by non-sequential ray tracing of the model, and the illumination effects of different illumination modes are simulated and compared.

  • Experimental research: Prepare enough test samples, including undamaged samples, defective samples and exceptional samples; prepare standby test light sources of different types and colors; use the different light sources to illuminate different positions of the target and observe the illumination effects.

  • Image analysis: Image quality is analyzed according to the detection task, which helps to optimize the illumination system design. Selecting appropriate image evaluation methods and guidelines is very important. An evaluation function may be used to evaluate image quality and guide illumination design according to the spatial-domain or frequency-domain characteristics of the images.

3.2. Gyroscope Pivot Bearing Illumination System Design

3.2.1. Pivot Bearing Vision Detection System

Transmission light illumination is used to measure the pivot bearing dimensions. As shown in Figure 2, a transmission light source with a condenser lens in front can enhance ray parallelism, and one can then obtain high contrast edge images. Surface detection illumination is comparatively complex, and it is the primary research content in this paper.

The detection system used in this paper contains a Zoom 6000 series lens and a Mitutoyo magnifying lens; objects can be magnified 25 times and the system depth of field is only 4 μm. The effective visual field is limited and only the central region can be clearly imaged. The size of this region is about 0.085 mm × 0.085 mm, and the corresponding region in the image is about 240 × 240 pixels. Hereinafter this region will be called the target region. This paper designs a mechanical device composed of vertical and horizontal rotary platforms (as shown in Figure 6), which allows each region of the ball surface to be observed by rotating the two platforms.

3.2.2. Reflection Model Analysis

When rays reach an object there are three effects: reflection, transmission and absorption. Geometric structure defects such as depressions, scratches and cracks can change the surface reflection. Surface property defects such as rust stains and blots may also cause changes in surface reflection and absorption. Any tiny structural defect induces regional roughness, which changes the regional reflection characteristics.

Nayar compared the Beckmann-Spizzichino physical optics model with the Torrance-Sparrow geometrical optics model and proposed a unified reflectance framework for smooth and rough surfaces [24]. As shown in Figure 7, θi is the angle of the incident rays, α is the direction of the camera, and θr is the main specular direction. The reflected radiance near the viewing direction comprises the specular lobe Isl, the specular spike Iss and the diffuse lobe Idl, as shown in Equation (1):

$I_{im} = I_{dl} + I_{sl} + I_{ss}$

The diffuse reflection component is represented by the Lambertian model, as shown in Equation (2) where Kdl denotes the strength of the diffuse lobe and θi is the angle of the incident rays:

$I_{dl} = K_{dl}\cos\theta_i$

Specular lobe reflection can be denoted by the Torrance-Sparrow model due to its simpler mathematical form, as shown in Equation (3), where Ksl is the magnitude of the specular lobe, Dk is the facet slope distribution function, F is the Fresnel coefficient, and the geometric attenuation factor G describes the shadowing and masking of facets by adjacent facets:

$I_{sl} = K_{sl}\,D_k\,F\,G$

The specular spike component is a very sharp function which is approximated by the delta function, as shown in Equation (4), where Kss is the strength of the specular spike component:

$I_{ss} = K_{ss}\,\delta(\theta_i - \theta_r)\,\delta(\phi_r)$

The main constituent of the surface reflection is judged by the relationship between the standard deviation σh of the surface height and the incident wavelength λ, as shown in Equation (5). Franz [25] considered that the specular spike effect can be ignored when E is greater than 1.5. Therefore, for rough surfaces the specular lobe is the major component and the specular spike can be ignored; for smooth surfaces the specular spike is the main factor to consider:

$E = \dfrac{\sigma_h}{\lambda}$

The surface roughness class is 13 (Ra < 0.025 μm) after a pivot bearing surface is polished; then:

$E = \dfrac{0.025 \times 10^{-6}}{555 \times 10^{-9}} = 0.045 < 1.5$

Therefore, the specular spike model applies to pivot bearing regions with no defects, while roughness increases in regions with defects, where the specular lobe model applies.
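As a quick sanity check of this threshold, the ratio of Equation (5) can be evaluated directly; the sketch below simply plugs in the values quoted above (Ra < 0.025 μm, λ = 555 nm, threshold 1.5).

```python
# Evaluate E = sigma_h / lambda from Equation (5) for the polished surface.
# Values are the ones quoted in the text; 555 nm is a mid-visible wavelength.
sigma_h = 0.025e-6   # surface height standard deviation (m), roughness class 13
wavelength = 555e-9  # incident wavelength (m)

E = sigma_h / wavelength
specular_spike_dominates = E < 1.5   # Franz's criterion [25]
print(f"E = {E:.3f}")                # well below 1.5 for defect-free regions
```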

3.2.3. Pivot Bearing Illumination Mode Research

As shown in Figure 8(a), coaxial light passes through a half-reflecting pellicle mirror in the lens and reaches the illuminated surface. Rays from the lens center reach the illuminated surface vertically and are reflected back into the lens, but off-axis incident rays cannot be reflected into the lens because the incidence angle increases on the curved illuminated surface. As a result, when only coaxial light is used, the center of the illuminated surface is bright and the brightness decreases gradually toward the surrounding surface.

As shown in Figure 8(b), a ring light is installed on the lens, with a hole of diameter r1 in the light source center. Ray 1 strikes point A of the illuminated surface at incidence angle α and enters the entrance pupil of the lens after surface reflection. Rays other than ray 1 that strike arc AB have incidence angles greater than α and cannot enter the entrance pupil after surface reflection. Rays inside ray 1 (like ray 2) would have smaller incidence angles on arc AB, but they cannot reach the illuminated surface because they are blocked by segment MN of the lens. Thus no ray with an incidence angle smaller than the critical angle α can reach arc AB and enter the lens; that is, the spherical crown corresponding to arc AB is always dark. When rays other than ray 1 (like ray 3) reach the surface beyond arc AB, some can enter the entrance pupil of the lens as the incidence angle changes. Moreover, the incidence angle β decreases gradually as point C moves away from the main optical axis O1O. Consequently, the reflected rays entering the lens increase gradually with distance from the main optical axis. The coaxial light and ring light form a complex illumination system, as shown in Figure 6, which balances the illumination of the target region.

3.3. Simulation and Experimental Research

3.3.1. Illuminance Uniformity Simulation

The LightTools software can be used for computer-aided design of the illumination system based on illumination optics. LightTools was used to simulate and analyze the surface illuminance uniformity of the complex illumination system described in this paper, formed by the coaxial and ring lights. In the system model, the illuminated object is a ball of diameter 0.5 mm, the distance between the emergent surface of the coaxial light and the illuminated surface is 20 mm, and the illuminated surface is specularly reflecting. The inner diameter of the ring light is equal to the lens inner diameter r1 = 30 mm, and the outer diameter is 90 mm. Twenty thousand rays are traced when only coaxial light is used. The illuminance diagram is shown in Figure 9(a), where the view on the left is a two-dimensional raster diagram, with x-coordinates and y-coordinates denoting object size, and the view on the right is a histogram, where different colors denote different illuminance classes. As shown in the figure, the illuminance of the target center is highest, and the further from the center, the lower the illuminance.

One hundred thousand rays are traced when only the ring light is used. The distance between the ring light and the illuminated surface is limited to 35 to 45 mm by the space constraints of the vision system. Distances of 35 mm, 40 mm and 45 mm are simulated, and the illuminance diagrams are shown in Figure 9(b–d). The illuminance of the target center is low, and the further from the center, the higher the illuminance. The illuminance diagram for complex illumination is shown in Figure 9(e); only the diagram for a distance of 35 mm is illustrated because the effect at the other distances is similar. The illuminance is even in the target region.

Illuminance uniformity is evaluated by the standard deviation of the regional illuminance, and the standard deviations for the target region are listed in Table 1. With complex illumination, the distance between the ring light and the illuminated surface is denoted L. Illuminance uniformity is better when complex illumination is used, and the standard deviation decreases as L decreases.

3.3.2. Experiment and Grey Scale Image Analysis

If direct light is used to illuminate the surface, the camera is easily saturated due to specular reflection. Moreover, the reflection changes with tiny angle changes of the light source, illuminated surface and lens. In this paper the ring light source therefore uses scattered illumination. Gray-scale images are shown in Figure 10, with the target region framed. The bright spots in Figure 10(a,e–g) are caused by strong regional reflection from the irregular ball surface after machining; the gray scale of Figure 10(h) is more uniform because the surface has no defects. Both the coaxial light and the ring light provide bright-field illumination. Rays reflected from defects, for example depressions or scratches, cannot enter the lens. As a result, there is a low gray-scale region that contrasts with the background. More analysis of the gray-scale images of the target region follows.

  • Target region gray-scale uniformity analysis: Illuminance uniformity is estimated by the gray-scale uniformity U of the target region, where a lower value indicates better uniformity. As shown in Equation (7), the variance and mean gray scale are denoted by Var and Ave, respectively. The computed results for Figure 10(a,e–g) are listed in Table 2; uniformity with complex illumination is better than with coaxial light alone, and it improves as L increases:

    $U = \dfrac{\mathrm{Var}}{\mathrm{Ave}}$

    $f(I) = \sum_{x}\sum_{y}\left[I(x+z,\,y) - I(x,\,y)\right]^2$

  • Image articulation analysis: Blurring appears to some extent around the target region when complex illumination is used. In addition, the articulation of the target region changes with the ring light distance L. In this paper the articulation of the target region is calculated by the Brenner function [26], which squares and sums the gray-scale differences of neighboring pixels. The greater the value, the higher the articulation, as shown in Equation (8), where I denotes the gray scale (after normalization), and z denotes the pixel interval, usually 1. Blurring appears around the target region boundary, so articulation is calculated in the boundary region, between 200 × 200 and 240 × 240 pixels from the image center. The results for Figure 10(a,e–g) are listed in Table. When L is 35 mm, the Brenner function has the greatest value and the articulation is best.
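The two image-quality measures above can be sketched directly; this is a minimal illustration on a synthetic 240 × 240 patch, not the paper's data, with `uniformity` implementing Equation (7) and `brenner` Equation (8).

```python
import numpy as np

def uniformity(region):
    """Equation (7): U = variance / mean of the gray levels; lower is more uniform."""
    region = region.astype(np.float64)
    return region.var() / region.mean()

def brenner(region, z=1):
    """Equation (8): sum of squared differences between pixels z apart; higher is sharper."""
    g = region.astype(np.float64) / 255.0   # normalized gray scale
    diff = g[:, z:] - g[:, :-z]
    return float((diff ** 2).sum())

flat = np.full((240, 240), 128, dtype=np.uint8)                       # perfectly uniform patch
noise = np.random.default_rng(0).integers(0, 40, (240, 240)).astype(np.uint8)
noisy = flat + noise                                                  # uneven-illumination stand-in

print(uniformity(flat), uniformity(noisy))   # 0.0 vs a positive value
print(brenner(flat), brenner(noisy))         # 0.0 vs a positive value
```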

4. PSO-PCNN Image Processing

4.1. PCNN Mathematical Model

As shown in Figure 11, each neuron contains an input field, a feedback field and a pulse generator [27]. The feedback field can receive both exterior and local stimulation, while the input field receives only local stimulation. Every neuron is connected to its neighborhood through the corresponding weight matrices and features attenuation delay, as shown in Equations (9) and (10):

$F_{ij}(n) = e^{-\alpha_F} F_{ij}(n-1) + S_{ij} + V_F \sum_{kl} M_{ijkl} Y_{kl}(n-1)$

$L_{ij}(n) = e^{-\alpha_L} L_{ij}(n-1) + V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1)$
where Fij is the feedback input of neuron Nij in the two-dimensional neural network, and Lij is the linking input; both remember former states and have an exponential attenuation form. Ykl is the neuron output at iteration (n − 1); VF and VL are the amplification coefficients of the feedback field and linking field, respectively; αF and αL are the attenuation time constants of the feedback field and linking field, respectively. The internal activity item is generated by nonlinear coupling modulation of the feedback input by the linking field, as shown in Equation (11); its value determines whether a neuron generates pulses. The modulation intensity is determined by the linking coefficient β:
$U_{ij}(n) = F_{ij}(n)\left[1 + \beta L_{ij}(n)\right]$

Pulses will be generated if the internal activity items are greater than the dynamic threshold, as shown in Equation (12):

$Y_{ij}(n) = \begin{cases} 1 & U_{ij}(n) > E_{ij}(n-1) \\ 0 & \text{otherwise} \end{cases}$

The dynamic threshold is denoted by Equation (13), where VE and αE denote the amplification coefficient and attenuation time constant of the dynamic threshold, respectively:

$E_{ij}(n) = e^{-\alpha_E} E_{ij}(n-1) + V_E Y_{ij}(n-1)$

4.2. PCNN Model Simplification and its Image Processing

Neurons have a one-to-one correspondence with image pixels, forming a single-layer, two-dimensional, locally connected network when PCNN is used for image processing. Shi [18] simplified the input field and recomposed the dynamic threshold into a linearly decreasing one with a constant calculated by derivation of the contrast (DOC). Lu [28] improved the region-growing PCNN model by modifying the linking channel function, which decreased the complexity of adjusting parameters.

In this paper, the basic PCNN model structure is simplified as follows. The input field is simplified to $F_{ij}(n) = S_{ij}$. The neighborhood term is omitted from the feedback field, so the response to neighboring neurons is reduced to the linking field alone: $L_{ij}(n) = V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1)$. The attenuation terms are omitted from both the feedback and linking fields, which reduces the number of structural parameters and the computation needed to determine them, while the basic characteristics of the PCNN model are retained. In this case, the internal activity item is denoted by Equation (15):

$U_{ij}(n) = S_{ij}(n)\left[1 + \beta L_{ij}(n)\right]$

Suppose neurons Nij and Nkl are linked, with exterior stimulations Sij and Skl respectively, and Sij > Skl. Initially, when neurons are not connected to each other, the greater the input value, the higher the firing frequency; that is, a high gray-scale pixel fires first. The dynamic threshold of the two neurons is initially 0, so the internal activity items are greater than the dynamic threshold. The dynamic threshold increases to VE immediately after the first firing and pulse output. At the same time, the output pulses of the two neurons enter each other's linking fields, which increases the internal state, but the neurons will not fire immediately because VE is set to a high value. Neuron Nij fires a second time first, because its exterior stimulation is greater than that of Nkl. Meanwhile, neuron Nkl receives a pulse input from neuron Nij through the linking field, which increases its internal state to Ukl = Skl(1 + βLkl) by coupled modulation. If Ukl > Ekl(n) at that time, neuron Nkl fires ahead of time, meaning that neuron Nkl is captured by neuron Nij. The two neurons then fire synchronously. This capture characteristic applies to a neuron and the other neurons in its neighboring field. PCNN generates pulses in similarity clusters, which enables neurons with similar properties to fire synchronously.

The linking weight matrix can be set to 4-connectivity, 8-connectivity or others according to actual requirements. The influence of a neighboring pixel on the center pixel depends on their distance, that is, on how strongly information transfers from the neighborhood to the center: the nearer to the center pixel, the greater the weight. Wijkl is the reciprocal of the squared Euclidean distance between the neighboring neuron and the current neuron, namely:

$W_{ijkl} = \dfrac{1}{(i-k)^2 + (j-l)^2}$

At the same time, pulses generated by neuron Nij can capture distant neurons through neuron-to-neuron transfer, due to the pulse transmission characteristic of PCNN. Consequently, the segmentation result features better self-adaptation than traditional threshold segmentation methods.
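The simplified model of this section can be sketched as follows. The parameter values are illustrative placeholders (in the paper β and VE are chosen by PSO), and the 3 × 3 linking weights follow Equation (16).

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Minimal zero-padded 'same' 2-D correlation (the kernel here is symmetric)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def pcnn_segment(S, beta=0.3, V_L=1.0, V_E=20.0, alpha_E=0.2, n_iter=10):
    """S: gray image normalized to [0, 1]. Returns the accumulated firing map."""
    E = np.zeros_like(S)                  # dynamic threshold starts at 0: all lit pixels fire once
    Y = np.zeros_like(S)
    fired = np.zeros(S.shape, dtype=bool)
    # 3x3 linking weights: inverse squared Euclidean distance, Equation (16)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        L = V_L * convolve2d_same(Y, W)          # linking input from previous outputs
        U = S * (1.0 + beta * L)                 # Equation (15)
        Y = (U > E).astype(float)                # Equation (12)
        fired |= Y.astype(bool)
        E = np.exp(-alpha_E) * E + V_E * Y       # Equation (13)
    return fired

S = np.zeros((20, 20))
S[5:15, 5:15] = 0.9                  # bright patch on a dark background
seg = pcnn_segment(S)
print(seg[10, 10], seg[0, 0])        # bright pixels fire, zero-input pixels never do
```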

4.3. PSO-Based Parameter Self-Adaption

4.3.1. PSO Algorithm

A PSO algorithm searches the parameter space for the global optimum using a swarm of particles [29]. Each particle of the population denotes a potential solution. The global optimum is approached and an optimal value obtained by information exchange among particles and iterative evolution. Assume that the position and velocity of the ith particle are represented as Xi = (Xi1,Xi2,⋯,Xid) and Vi = (Vi1,Vi2,⋯,Vid), respectively, in a d-dimensional search space. Particles are updated at each iteration using two optimum solutions: one is the optimal position Pi = (Pi1,Pi2,⋯,Pid) found so far by the particle itself; the other is the optimal position Pg = (Pg1,Pg2,⋯,Pgd) found so far by the whole population. Each particle updates its velocity and position by Equations (17) and (18):

$v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 \left[p_{ij} - x_{ij}(t)\right] + c_2 r_2 \left[p_{gj} - x_{ij}(t)\right]$

$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)$
where c1 and c2 are learning factors, t is the number of iterations, r1 and r2 are two random numbers in the range 0 to 1, and w is an inertia factor. A linearly decreasing inertia factor is used to avoid PSO precocity and oscillation near the global optimal solution, as shown in Equation (19):
$w = w_{\max} - t\,\dfrac{w_{\max} - w_{\min}}{M_{num}}$
where t denotes the current iteration number and Mnum denotes the total number of iterations. The inertia factor decreases linearly from the maximum to the minimum. A large inertia factor favors leaving local minima and global search; a small inertia factor favors accurate local search and algorithm convergence.
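A minimal sketch of the update rules of Equations (17)–(19) on a toy 2-D maximization problem; all constants, the velocity clamp and the test fitness are illustrative, not the paper's.

```python
import numpy as np

# PSO with a linearly decreasing inertia factor, Equations (17)-(19).
rng = np.random.default_rng(1)
n_particles, dim = 20, 2
c1 = c2 = 2.0
w_max, w_min, M_num = 0.9, 0.4, 100

def fitness(x):
    """Toy objective to maximize: a quadratic with its peak at (3, -1)."""
    return -((x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2)

X = rng.uniform(-10, 10, (n_particles, dim))
V = np.zeros((n_particles, dim))
P = X.copy()                                      # personal best positions
P_fit = np.array([fitness(x) for x in X])
g = P[P_fit.argmax()].copy()                      # global best position

for t in range(M_num):
    w = w_max - t * (w_max - w_min) / M_num       # Equation (19)
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Equation (17)
    V = np.clip(V, -4.0, 4.0)                     # clamp to keep the swarm stable
    X = X + V                                     # Equation (18)
    for i in range(n_particles):
        f = fitness(X[i])
        if f > P_fit[i]:
            P_fit[i], P[i] = f, X[i].copy()
    g = P[P_fit.argmax()].copy()

print(g)   # should land near (3, -1)
```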

4.3.2. PSO-PCNN Parameters Configuration

The number of undetermined parameters for the simplified PCNN model has been greatly decreased, with only β, Wijkl, VL, VE and αE remaining. VL, Wijkl and αE have little impact on the segmentation results; β and VE have a large impact and need to be set differently for different images. Therefore, these two key parameters are optimized by the PSO algorithm in this paper. A large between-cluster variance indicates a low probability of misclassifying the background. The maximum between-cluster variance criterion is used as the fitness function in this paper, defined as follows:

$\sigma^2 = p_0(\mu_0 - \mu_T)^2 + p_1(\mu_1 - \mu_T)^2$
where $p_0 = \sum_{i \in A} p_i$ and $p_1 = \sum_{i \in B} p_i = 1 - p_0$; A denotes the target region of the binary image after PCNN segmentation, B denotes the corresponding background region, pi denotes the probability of each gray scale, and μ0, μ1 and μT denote the mean gray scale of the target region, the background region and the whole image, respectively.
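Equation (20) can be computed directly from the original gray image and the binary segmentation mask. The sketch below uses pixel fractions for p0 and p1 (equivalent to summing the gray-level probabilities) and a synthetic two-level image as a check; the function name is illustrative.

```python
import numpy as np

def between_cluster_variance(gray, mask):
    """Equation (20): gray is a 2-D gray image, mask the binary target region A."""
    gray = gray.astype(np.float64)
    p0 = mask.mean()             # fraction of pixels assigned to the target region
    p1 = 1.0 - p0
    if p0 == 0.0 or p1 == 0.0:
        return 0.0               # degenerate segmentation separates nothing
    mu_T = gray.mean()
    mu_0 = gray[mask].mean()
    mu_1 = gray[~mask].mean()
    return p0 * (mu_0 - mu_T) ** 2 + p1 * (mu_1 - mu_T) ** 2

img = np.zeros((10, 10))
img[:, 5:] = 200.0                                          # two flat gray levels
good = np.zeros((10, 10), dtype=bool); good[:, 5:] = True   # matches the bright half
bad = np.zeros((10, 10), dtype=bool); bad[::2, :] = True    # ignores the structure
print(between_cluster_variance(img, good), between_cluster_variance(img, bad))
```

A segmentation aligned with the true structure scores high; one that cuts across it scores near zero, which is exactly why the criterion works as a PSO fitness.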

The key parameters β and VE of PCNN are searched by the PSO algorithm with a linearly decreasing inertia factor. The procedure is as follows:

  • Initialize the position and velocity of each particle in the population, Pi = (β, VE);

  • Compute the fitness value of each particle. The particle position vector is imported into the PCNN model and image segmentation is executed. The output binary image is mapped onto the target and background regions of the original image. Compute the between-cluster variance as the fitness value using Equation (20);

  • The position and fitness value of each particle are saved as its individual best position (pBest), and the best of all the pBest positions and fitness values is saved as the population best position (gBest);

  • Update the inertia factor w using Equation (19);

  • Update the velocity vij(t+1) and position xij(t+1) of each particle using Equations (17) and (18);

  • If a particle's fitness value is better than that of its historical best position, set pBest to the particle's current position; if the best fitness value in the population is better than that of the historical population best, set gBest to that particle's current position;

  • Set t = t + 1 and return to step (2), until t = Mnum;

  • The best position found by the iteration is gBest, and the optimized β and VE are used for PCNN image segmentation.
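The steps above can be condensed into a small PSO driver. Here `fitness` stands in for running the PCNN with a candidate (β, VE) and scoring the result with Equation (20); the clamping of positions to a search box is our implementation choice, not specified in the paper:

```python
import random

def pso_search(fitness, bounds, n=10, m_num=20,
               c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Maximize `fitness` over a parameter vector such as (beta, V_E).

    `bounds` is a list of (lo, hi) pairs, one per dimension.
    Returns the population best position gBest and its fitness.
    """
    dim = len(bounds)
    # Step 1: initialize particle positions and velocities.
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    # Steps 2-3: evaluate all particles, seed pBest and gBest.
    p_best = [x[:] for x in xs]
    p_fit = [fitness(x) for x in xs]
    g_idx = max(range(n), key=lambda i: p_fit[i])
    g_best, g_fit = p_best[g_idx][:], p_fit[g_idx]
    for t in range(m_num):
        w = w_max - t * (w_max - w_min) / m_num     # Equation (19)
        for i in range(n):
            r1, r2 = random.random(), random.random()
            for d in range(dim):                    # Equations (17)-(18)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (p_best[i][d] - xs[i][d])
                            + c2 * r2 * (g_best[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])        # clamp to the search box
            f = fitness(xs[i])
            if f > p_fit[i]:                        # Step 6: update pBest
                p_best[i], p_fit[i] = xs[i][:], f
                if f > g_fit:                       # ... and gBest
                    g_best, g_fit = xs[i][:], f
    return g_best, g_fit
```

Usage would look like `pso_search(lambda p: score_pcnn(image, *p), [(0.1, 1.0), (5.0, 50.0)])`, where `score_pcnn` is a hypothetical wrapper that segments the image with the candidate parameters and returns the between-cluster variance.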

5. Experiments

5.1. Experiment One: Dimension Measurement

  • Axis diameter measurement: Axis diameter measurement can be treated as the measurement of the distance between parallel lines; the prerequisite is fitting parallel lines to the two groups of edge data. The edge points obtained from image processing are fitted by least squares under the constraint that the two lines share the same slope, and the distance between the two lines is then calculated. The numbers of points on the two lines are denoted by n1 and n2, the point sets are P1 and P2, and the line equations are:

    x sin θ − y cos θ + d1 = 0
    x sin θ − y cos θ + d2 = 0    (21)
    The target function is constructed by least squares fitting, as shown in Equation (22):
    E(θ, d1, d2) = Σ_{(xi, yi)∈P1} [xi sin θ − yi cos θ + d1]² + Σ_{(xi, yi)∈P2} [xi sin θ − yi cos θ + d2]²    (22)
    With the centroids x̄j = (1/nj) Σ_{(xi, yi)∈Pj} xi and ȳj = (1/nj) Σ_{(xi, yi)∈Pj} yi (j = 1, 2), define:
    aj = Σ_{(xi, yi)∈Pj} (xi − x̄j)² − Σ_{(xi, yi)∈Pj} (yi − ȳj)²,  bj = Σ_{(xi, yi)∈Pj} (xi − x̄j)(yi − ȳj),  (j = 1, 2)    (23)
    Let a = a1 + a2 and b = b1 + b2. Setting the derivative of E with respect to θ to zero yields a sin 2θ − 2b cos 2θ = 0, so for b ≠ 0, tan θ = (−a + √(a² + 4b²)) / (2b). Substituting into dj = ȳj cos θ − x̄j sin θ (j = 1, 2) gives the two parallel line equations, and the distance between them, |d1 − d2|, is the axis diameter.

  • Ball head diameter measurement: Because of the ball machining process, the two arc segments do not form a perfect circle. Therefore, the ball head diameter is calculated by averaging multiple measurements of the maximum distance between the left and right arc segments.

  • Experimental results: Ten pivot bearings were measured with the system to validate the measurement methods. The results are compared with contact measurements obtained with length measuring instruments (measurement accuracy 0.0002 mm). Table 3 shows that the difference between the measurement results is less than 0.001 mm.
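The parallel-line fitting used in the axis diameter measurement above can be sketched as follows. The closed form `0.5 * atan2(2b, a)` picks the angle minimizing E, which is equivalent to solving a sin 2θ − 2b cos 2θ = 0 for the minimizing root (and it also handles b = 0 gracefully); names are ours:

```python
import math

def parallel_fit_distance(p1, p2):
    """Fit two parallel lines x*sin(t) - y*cos(t) + d_j = 0 to the edge
    point sets p1, p2 (lists of (x, y) tuples) and return their separation,
    i.e. the axis diameter estimate |d1 - d2|.
    """
    a = b = 0.0
    means = []
    for pts in (p1, p2):
        n = len(pts)
        xm = sum(x for x, _ in pts) / n          # centroid of the edge set
        ym = sum(y for _, y in pts) / n
        means.append((xm, ym))
        a += sum((x - xm) ** 2 - (y - ym) ** 2 for x, y in pts)
        b += sum((x - xm) * (y - ym) for x, y in pts)
    # Minimizing E gives a*sin(2t) - 2b*cos(2t) = 0, i.e. tan(2t) = 2b/a.
    theta = 0.5 * math.atan2(2.0 * b, a)
    d = [ym * math.cos(theta) - xm * math.sin(theta) for xm, ym in means]
    return abs(d[0] - d[1])    # (sin t, -cos t) is a unit normal
```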

5.2. Experiment Two: Defect Image Detection

There are many causes of pivot bearing surface defects, for example non-uniform material structure, lapping stress and polishing, which result in defects of different dimensions and types. Images of pivot bearing surface defects are segmented to validate the algorithm put forward in this paper. VL and αE in the PCNN model are set to 1, and Wijkl is set by Equation (16) with a 5 × 5 neighborhood. The PSO inertia factor limits in the experiment are wmax = 0.9 and wmin = 0.4; the iteration number is Mnum = 20; the particle population size is N = 10; the learning factors are c1 = c2 = 2, as suggested by Shi [30]. The undetermined PCNN parameters β and VE are initialized in the PSO algorithm, and the optimum found by the iterative search is used for the PCNN parameters. The experiments were conducted on an Intel Pentium 4 3.0 GHz personal computer; the processing time of the PCNN algorithm is about 3 s on average. Processing results for various defect images using the method of this paper are compared with others in Figure 12, which shows rust stains, macula, coarse threads and depressions from top to bottom, and the original image and the processing results of this paper, maximum entropy and minimum error from left to right.
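Equation (16) itself is not reproduced in this excerpt; assuming it takes the inverse-Euclidean-distance form commonly used in simplified PCNN models (an assumption on our part), the 5 × 5 linking weight matrix Wijkl could be built as:

```python
import math

def linking_kernel(size=5):
    """5x5 synaptic weight matrix: each neighbor's weight is the
    reciprocal of its Euclidean distance to the central neuron (the
    inverse-distance form commonly used in simplified PCNN models).
    The centre is 0 so a neuron does not link to itself.
    """
    c = size // 2
    return [[0.0 if (i == c and j == c)
             else 1.0 / math.hypot(i - c, j - c)
             for j in range(size)] for i in range(size)]
```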

Sunil [31] used the buffer region matching method [32] to evaluate segmentation results for concrete infrastructure crack images. We adopt Sunil's method to objectively evaluate the segmentation results in this paper; the algorithm flow chart is shown in Figure 13. Buffer regions are formed by a 3 × 3 morphological dilation of the defect regions extracted manually, and the segmented defects are then matched against this buffer region. The segmented pixels inside the buffer region are denoted S1 and those outside it S2. Similarly, buffer regions are formed by dilating the defect targets extracted by the segmentation methods, and these are compared with the manually extracted defects; the manually extracted pixels inside and outside this buffer region are denoted S3 and S4, respectively. Image segmentation is evaluated by the three factors in Equations (24)–(26): C denotes Correctness, the degree to which the segmented defect region is correct, as shown in Equation (24); I denotes Integrality, the degree to which the manually extracted defects are covered by the segmentation, as shown in Equation (25); Q denotes Quality, a combined measure of correctness and integrality, as shown in Equation (26).

C = S1 / (S1 + S2)    (24)
I = S3 / (S3 + S4)    (25)
Q = S1 / (S1 + S2 + S4)    (26)
S = k1C + k2I + k3Q    (27)

The ideal value of the three evaluation factors is 1; a value closer to 1 indicates better performance. The evaluation results for pivot bearing surface defect processing are listed in Table 4, where all three factors for the coarse thread and macula results of this paper are better than those of the other two methods.
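A minimal sketch of the buffer region matching evaluation, using the 3 × 3 dilation described above; it assumes non-empty binary masks and uses plain Python lists (function names are ours):

```python
def dilate(mask, r=1):
    """Binary morphological dilation with a (2r+1)x(2r+1) square
    structuring element (r=1 gives the 3x3 case used in the paper)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if any(mask[a][b]
                   for a in range(max(0, i - r), min(h, i + r + 1))
                   for b in range(max(0, j - r), min(w, j + r + 1))):
                out[i][j] = 1
    return out

def match_scores(truth, seg):
    """Equations (24)-(26): Correctness, Integrality, Quality.

    truth -- manually extracted defect mask; seg -- algorithm output.
    Assumes both masks are non-empty 0/1 grids of the same shape.
    """
    buf_truth = dilate(truth)      # buffer around the manual defects
    buf_seg = dilate(seg)          # buffer around the segmented defects
    s1 = sum(s and bt for rs, rb in zip(seg, buf_truth)
             for s, bt in zip(rs, rb))        # seg pixels inside buffer
    s2 = sum(s and not bt for rs, rb in zip(seg, buf_truth)
             for s, bt in zip(rs, rb))        # seg pixels outside buffer
    s3 = sum(t and bs for rt, rb in zip(truth, buf_seg)
             for t, bs in zip(rt, rb))        # truth pixels covered
    s4 = sum(t and not bs for rt, rb in zip(truth, buf_seg)
             for t, bs in zip(rt, rb))        # truth pixels missed
    c = s1 / (s1 + s2)
    i = s3 / (s3 + s4)
    q = s1 / (s1 + s2 + s4)
    return c, i, q
```

The dilation gives each method a one-pixel tolerance band, so a segmentation that is offset by a single pixel still scores perfectly, which is the point of buffer matching.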

The correctness of the rust stain result of this paper is not the best, but its integrality and quality are better. The integrality of the fovea result is not as good as that of maximum entropy, but its correctness and quality are better. To compare the three algorithms synthetically across the different defects, the synthetic evaluation factor of Equation (27), proposed in this paper, is calculated with coefficients k1 = k2 = k3 = ⅓. The results are shown in Table 5. For every defect type, the PCNN result of this paper is better than the other two, which shows that the method of this paper is suitable for defect image segmentation.
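The synthetic factor of Equation (27) with equal weights is simply the mean of C, I and Q, and it reproduces the Table 5 entries from the Table 4 rows:

```python
def synthetic_score(c, i, q, k=(1 / 3, 1 / 3, 1 / 3)):
    """Equation (27): S = k1*C + k2*I + k3*Q (equal weights by default)."""
    return k[0] * c + k[1] * i + k[2] * q

# Table 4 "rust stains / this paper" row -> Table 5 value of about 0.9697
s = synthetic_score(0.971, 0.983, 0.955)
```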

6. Conclusions

An image-analysis-based vision detection system for gyroscope pivot bearing dimension measurement and surface defect detection is described in this paper. It implements pivot bearing axis diameter measurement, ball head diameter measurement and surface defect detection in one instrument. Illumination has a direct effect on imaging quality and detection results; therefore a stepwise illumination system design method combining design, simulation, experiment and analysis is proposed. Both the characteristics of the detection target and the detection requirements are considered, and the illumination model is designed according to the system environment. Illuminance uniformity is simulated and the resulting images are analyzed experimentally to optimize the illumination system design. A complex illumination system composed of a coaxial light and a ring light source is used for gyroscope pivot bearing surface defect detection, which enhances the illuminance uniformity and image articulation of the target regions. Furthermore, a PSO-based PCNN method is proposed to process pivot bearing defect images: the two key PCNN parameters, the connection coefficient and the dynamic threshold, are optimized by the PSO algorithm using the maximum between-cluster variance as the fitness function. A linearly decreasing inertia factor is adopted to avoid PSO precocity and oscillation near the global optimal solution, which yields a self-adaptive PCNN parameter setting. Buffer region matching evaluation of the segmentation results confirms that the methods of this paper can be used for defect image segmentation. Since iterative computation is required for both PSO and PCNN, how to speed up the PSO-based optimization of the PCNN parameters still needs further study.


Acknowledgments

This work is supported by the Changjiang Scholars and Innovative Research Team in University program (Grant No. 0705), and partially supported by the National Natural Science Foundation of China (Grant No. 60802044).


References

  1. Elias, NM; Euripides, GM; Petrakis, MZ. A Survey on Industrial Vision Systems, Applications and Tools. Image Vis. Comput 2003, 21, 171–188. [Google Scholar]
  2. Chen, MC. Roundness Measurement for Discontinuous Perimeters via Machine Vision. Comput. Indust 2002, 4, 185–197. [Google Scholar]
  3. Dimitrios, K; Heodora, V. Automated Inspection of Gaps on the Automobile Production Line Through Stereo Vision and Specula Reflection. Comput. Indust 2001, 46, 49–63. [Google Scholar]
  4. Yih, CC; Chern, SL. The Feature Extraction and Analysis of Flaw Detection and Classification in BGA Gold-Plating Areas. Expert Syst. Appl 2008, 35, 1771–1779. [Google Scholar]
  5. Ahmed, R; Sutcliffe, MPF. Identification of Surface Features on Cold-Rolled Stainless Steel Strip. Wear 2000, 244, 60–70. [Google Scholar]
  6. Franz, P. Detection of Surface Defects on Raw Steel Blocks Using Bayesian Network Classifiers. Patt. Anal. Appl 2004, 7, 333–342. [Google Scholar]
  7. Seung, NY; Jae, HJ. Auto Inspection System Using a Mobile Robot for Detecting Concrete Cracks in a Tunnel. Autom. Const 2007, 16, 255–261. [Google Scholar]
  8. Romeu, R; Da, S. Estimated Accuracy of Classification of Defects Detected in Welded Joints by Radiographic Tests. NDT&E Int 2005, 38, 335–343. [Google Scholar]
  9. Kumar, A. Neural Network Based Detection of Local Textile Defects. Patt. Recog 2003, 36, 1645–1659. [Google Scholar]
  10. Funck, JW; Zhong, Y. Image Segmentation Algorithms Applied to Wood Defect Detection. Comput. Elect. Agr 2003, 41, 157–179. [Google Scholar]
  11. Yang, G; Gains, JA; Nelson, BJ. A Supervisory Wafer-Level Microassembly System for Hybrid MEMS Fabrication. J. Intell. Rob. Syst 2003, 37, 43–68. [Google Scholar]
  12. Lu, SL; Zhang, XM. Analysis and Optimal Design of Illuminator for Lead Fess Tin Solder Joint Inspection. Opt. Precis. Eng 2008, 16, 1376–1383. [Google Scholar]
  13. Sunil, KK. Lighting Design for Machine Vision Application. Image Vis. Comput 2006, 24, 720–726. [Google Scholar]
  14. Bonnot, N; Seulin, R; Merienne, F. Machine Vision System for Surface Inspection on Brushed Industrial Parts. Proc. SPIE 2004, 5303, 64–72. [Google Scholar]
  15. Ng, TW. Optical Inspection of Ball Bearing Defects. Measur. Sci. Technol 2007, 18, 73–76. [Google Scholar]
  16. Wu, WC; Zhao, H; Liu, WW. Effects of Illumination on Image Quality in Precision Vision Measurement. J. Shanghai Jiaotong Univ 2009, 43, 391–395. [Google Scholar]
  17. Ji, L; Yi, Z. A Mixed Noise Image Filtering Method Using Weighted-Linking PCNNs. Neurocomputing 2008, 71, 2986–3000. [Google Scholar]
  18. Shi, MH; Jiang, SS; Wang, H. A Simplified Pulse-Coupled Neural Network for Adaptive Segmentation of Fabric Defects. Mach. Vision Appl 2009, 20, 131–138. [Google Scholar]
  19. Yang, SY; Wang, M; Lu, YX. Fusion of Multiparametric SAR Images Based on SW-Nonsubsampled Contourlet and PCNN. Sign. Process 2009, 89, 2596–2608. [Google Scholar]
  20. Gu, XD. Feature Extraction Using Unit-Linking Pulse Coupled Neural Network and its Applications. Neural Process. Lett 2008, 27, 25–41. [Google Scholar]
  21. Richard, E; Michael, R; Michele, B. Using a Genetic Algorithm to Find an Optimized Pulse Coupled Neural Network Solution. Proc SPIE 2008, 6979, 69790M-1–69790M-8. [Google Scholar]
  22. Kennedy, J; Eberhart, RC. Particle Swarm Optimization. Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  23. Chao, SM; Tsai, DM. Anisotropic Diffusion with Generalized Diffusion Coefficient Function for Defect Detection in Low-Contrast Surface Images. Patt. Recogn 2010, 43, 1917–1931. [Google Scholar]
  24. Nayar, SK; Ikeuchi, K; Kanade, T. Surface Reflection: Physical and Geometrical Perspectives. IEEE Tran. Patt. Anal. Mach. Intell 1991, 13, 611–634. [Google Scholar]
  25. Franz, P; Paul, O. Visual Inspection of Machined Metallic High-Precision Surfaces. EURASIP J. Appl. Sign. Process 2002, 7, 667–678. [Google Scholar]
  26. Firestone, L; Cook, K; Culp, K. Comparison of Autofocus Methods for Automated Microscopy. Cytometry 2001, 12, 195–206. [Google Scholar]
  27. Johnson, LJ; Padgett, ML. PCNN Models and Applications. IEEE Trans. Neural Netw 1999, 2, 480–498. [Google Scholar]
  28. Lu, YF; Miao, J; Duan, LJ; Qiao, YH; Jia, RX. A New Approach to Image Segmentation Based on Simplified Region Growing PCNN. Appl. Math. Comput 2008, 205, 807–814. [Google Scholar]
  29. Parsopoulos, KE; Vrahatis, MN. Recent Approaches to Global Optimization Problems through Particle Swarm Optimization. Nat. Comput 2002, 1, 235–306. [Google Scholar]
  30. Shi, Y; Eberhart, R. A Modified Particle Swarm Optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  31. Sunil, KS; Paul, WF. Automated Detection of Cracks in Buried Concrete Pipe Images. Autom. Construct 2006, 15, 58–72. [Google Scholar]
  32. McGlone, C; Shufelt, J. Projective and Object Space Geometry for Monocular Building Extraction. Comput. Vis. Patt. Recogn 1994. [Google Scholar] [CrossRef]
Figure 1. Drawing of a gyroscope pivot bearing.
Figure 2. Gyroscope pivot bearing vision detection system.
Figure 3. Detection sketch map for different working faces of pivot bearing surface.
Figure 4. Sketch map of dimensions measurement.
Figure 5. Relationship of light source selection and detected surface.
Figure 6. Framework map of the detection system.
Figure 7. Reflection model.
Figure 8. Light source design. (a) Coaxial light, (b) Ring light source.
Figure 9. Illumination diagram: (a) coaxial light only; (b), (c), (d) ring light only, L is 35 mm, 40 mm, 45 mm in turn; (e) complex illumination.
Figure 10. Gray scale image of target region: (a) coaxial light only; (b), (c), (d) ring light only, L is 35 mm, 40 mm, 45 mm in turn; (e), (f), (g) combined light, L is 35 mm, 40 mm, 45 mm in turn; (h) no defect.
Figure 11. PCNN neural structure drawing.
Figure 12. Result of image processing: (a) original image with defect; (b) result of this paper; (c) result of maximum entropy; (d) result of minimum error.
Figure 13. Flowchart of buffer region matching.
Table 1. Standard criteria for target region illuminance.
Columns: Coaxial light | Combining illumination (L = 35 mm, L = 40 mm, L = 45 mm)
Table 2. Uniformity and articulation of image in different illuminance modes.

Illuminance mode                U      Brenner
Coaxial light                   3.58   1.14
Combination light, L = 35 mm    1.17   4.37
Combination light, L = 40 mm    1.33   3.85
Combination light, L = 45 mm    1.49   2.51
Table 3. Measurement results of axis diameter and ball head diameter (mm).

Workpiece number                                   1        2        3        4        5        6        7        8        9        10
Axis diameter, system measurement                  3.0015   3.0016   3.0015   3.0013   3.0011   3.0016   3.0006   3.0012   3.0001   3.0018
Axis diameter, length measuring instrument         3.0018   3.0011   3.0019   3.0016   3.0008   3.0012   3.0004   3.0015   3.0005   3.0016
Ball head diameter, system measurement             0.4944   0.4950   0.4968   0.4971   0.4967   0.4985   0.4978   0.4968   0.4936   0.4935
Ball head diameter, length measuring instrument    0.4941   0.4955   0.4965   0.4969   0.4965   0.4989   0.4985   0.4970   0.4939   0.4940
Table 4. Result of buffer region matching estimate.

Defect          Segmentation method    C       I       Q
Rust stains     This paper             0.971   0.983   0.955
                Minimum error          0.979   0.841   0.800
                Maximum entropy        0.987   0.908   0.889
Macula          This paper             0.980   0.877   0.792
                Minimum error          0.966   0.714   0.597
                Maximum entropy        0.921   0.844   0.773
Coarse thread   This paper             0.980   0.867   0.792
                Minimum error          0.931   0.844   0.773
                Maximum entropy        0.976   0.714   0.597
Fovea           This paper             0.973   0.985   0.963
                Minimum error          0.979   0.948   0.927
                Maximum entropy        0.882   0.991   0.876
Table 5. S estimation factor results.

                   Rust stains   Macula   Coarse thread   Fovea
This paper         0.9697        0.8830   0.8797          0.9736
Minimum error      0.8733        0.7590   0.8493          0.9513
Maximum entropy    0.9280        0.8460   0.7623          0.9163