Article

Particle Swarm Optimisation in Practice: Multiple Applications in a Digital Microscope System

1 Ash Technologies Ltd., B5, Newhall, M7 Business Park, W91 P684 Co. Kildare, Ireland
2 Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
3 Department of Computer Science and Information Systems, University of Limerick, Castletroy, V94 T9PX Limerick, Ireland
4 School of Computer Science and Informatics, De Montfort University, The Gateway, Leicester LE1 9BH, UK
* Author to whom correspondence should be addressed.
Previously at Cyber Technology Institute, School of Computer Science and Informatics, De Montfort University, The Gateway, Leicester LE1 9BH, UK.
Appl. Sci. 2022, 12(15), 7827; https://doi.org/10.3390/app12157827
Submission received: 20 July 2022 / Revised: 1 August 2022 / Accepted: 2 August 2022 / Published: 4 August 2022
(This article belongs to the Special Issue Applications of Deep Learning and Artificial Intelligence Methods)

Abstract

We demonstrate that particle swarm optimisation (PSO) can be used to solve a variety of problems arising during the operation of a digital inspection microscope, providing a use case for the feasibility of heuristics in a real-world product. We present solutions to four measurement problems, all based on PSO, which allows for a compact software implementation addressing different tasks. We have found that PSO solves these problems with a small software footprint and good results in a real-world embedded system. Notably, in the microscope application, this eliminates the need to return the device to the factory for calibration.

1. Introduction

Computational intelligence has long been a major field of research. This includes stochastic optimisation methods, such as those from the fields of evolutionary computing and swarm intelligence [1,2]. The latter optimisation paradigms feature several heuristics that can be applied to black-box problems, or to any other challenging optimisation problem for which an exact method would not be viable or time-efficient. For this reason, such heuristic approaches not only represent an interesting fundamental research topic, but have also become very popular among practitioners, who benefit from their wide applicability in several domains of science, engineering and industry [3,4,5,6]; however, in the vast majority of cases, the literature reporting on such application studies is written from an academic point of view, i.e., highlighting novelty and technical aspects, and often overlooks the practical impact of the proposed technology on industry. In this study, we instead focus on showing that introducing a well-known heuristic optimiser into technological devices can improve characteristics of the product, making it not only more efficient but also more usable, thus improving customer satisfaction. For this reason, we selected an established and simple optimisation heuristic, namely the particle swarm optimisation algorithm (see Section 2.2 for details), and used it to address feedback from customers on a digital microscope/inspection system for medical applications produced by Ash Ltd. Through customer feedback, four areas were identified where improvements would increase quality and simultaneously reduce costs. We aim to show that, if areas of improvement are well formulated as optimisation problems, embedding even one simple general-purpose algorithm within the device (we present the details in Section 2.1) can make a major difference. Moreover, we present the production-ready solutions for these four problems, consisting of
  • An object height measurement task (Section 3);
  • An image stitching for high-resolution images task (Section 4);
  • An accurate colour reproduction task (Section 5);
  • An image distortion correction task (Section 6).
Note that using a single general-purpose optimiser for the four problems is preferable in this specific context. The advantages of this approach are multiple: only a single software implementation of an optimisation algorithm needs to be developed, the software/firmware is easier to maintain, porting the produced software to different platforms is faster, and training engineers becomes simpler, faster and less costly. It must also be pointed out that the four addressed tasks are measurement problems. As the digital microscope is used in a wide variety of applications by the customers who purchase it, these measurement processes can be very different; therefore, optimising algorithms for a particular dataset to gain marginal improvements is not the aim. In fact, this may cause deterioration of results on other datasets [7]. This further supports our approach, which is also consistent with the tenets of Industry 4.0, in providing bespoke solutions to practical problems in an efficient and inexpensive manner, both resource-wise and computationally.

2. Materials and Methods

2.1. Omni 3 Digital Microscope Inspection and Measurement System

The Omni 3 is a system for inspection and measurement purposes. It is based on digital imaging technologies and includes a zoom lens with a magnification range between 2.5 and 672. The system has integrated software applications which eliminate the need for an external computer. Those applications include:
  • Fast autofocus function;
  • Image stitching;
  • Image stacking;
  • 2D and 3D measurement and graticules;
  • Image comparator;
  • Object counting;
  • Colour analysis;
  • 2D measurement functions including point to point, polyline, circle and rectangles.
The system achieves a 2D measurement accuracy of ±1% and a Z-height accuracy of 100 μm. A full description of the system can be found in [8].
All software was developed using OpenCV [9] and the Qt C++ framework [10]. Other software and libraries used are outlined in the sections where they are used.
For an overview of the field of metrology and a literature review, we refer the reader to [11].

2.2. Particle Swarm Optimisation

Particle swarm optimisation [12,13] is a swarm intelligence metaheuristic for optimising continuous nonlinear problems defined on an n-dimensional search space $D \subset \mathbb{R}^n$ (characterised, in this work, by $n$ lower and upper bounds $l_i$ and $u_i$, with $i = 1, \ldots, n$). Its popular working mechanism is quite simple and is iterated until a stop condition is met, the most used stop criteria being a fixed budget of objective function calls, usually expressed as a maximum number of iterations $I_{max}$; failing to improve upon the objective value of the best solution found so far, $x^{GB}$, for a number $F_{max}$ of iterations; or the achievement of a satisfactory level of accuracy for the solution (measured with a threshold $\tau$).
Each iteration consists of the perturbation, and interaction, of a number of particles $N_P$, each associated with its current position in the search space $x_j \in D$ (i.e., the candidate solution), its velocity $v_j \in \mathbb{R}^n$, and the 'personal best' position ever explored, $x_j^{PB}$ (with $j = 1, \ldots, N_P$). Note that a solution is said to be better, or to outperform another one, if its objective function value is lower when facing a minimisation problem, and the opposite for a maximisation one. The same implementation of a PSO can be used for both minimisation and maximisation by simply multiplying the objective function $f(\cdot)$ by $-1$ as needed. A complete iteration requires calling $f(\cdot)$ $N_P$ times, as each solution $x_j$ (which is initially randomly generated in $D$) must be perturbed to attempt to explore a new position whose functional value outperforms that of the global best solution $x^{GB}$, in order to update it. This is achieved by applying the following operators to the velocity and position vectors of the $N_P$ particles:
$$v_j^{(new)} = w\, v_j + r_1 \left(x_j^{PB} - x_j\right) + r_2 \left(x^{GB} - x_j\right) \quad (1)$$
$$x_j^{(new)} = x_j + v_j^{(new)} \quad (2)$$
where $w$ is the 'inertia' weight (controlling the exploratory step size [14]) while $r_1$ and $r_2$ are random weights uniformly drawn within $[0, c_1]$ and $[0, c_2]$, respectively. The acceleration constants $c_1$ and $c_2$ are user-defined to apply more pressure towards the local (personal) best solutions, towards the global best solution, or to achieve a balanced search. More details can be obtained from [15]. Commonly employed settings from the literature see $c_1 = c_2 = 2$ as good choices for several applications [12], while $w = 0.7$ is a quite established value, even though adaptive formulas for dynamically adjusting $w$, see, e.g., $w \in [0.4, 0.9]$ [16,17], as well as for adjusting the acceleration constants, see, e.g., [18,19,20], also exist. Note that the velocity vectors, which act more as displacement vectors for the current positions, are usually initialised at random within $D$ (small values are to be preferred [21]) and can then assume any arbitrary value. This may lead to the undesired 'velocity explosion' effect, which should be mitigated, e.g., as shown in [22], and by avoiding a too small swarm size [23]. Our implementation controls the velocity values via the 'clamping' method from [22]. Finally, we employ the 'absorbing walls' method in [24] to deal with infeasible solutions that might be generated by the PSO logic during the search for optima. This consists of saturating the infeasible components of a generic $j$th particle (i.e., those components of $x_j$ falling outside the boundaries) to the closest violated boundary and re-initialising the corresponding velocity components of $v_j$ with zeros.
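For illustration, the sketch below shows a minimal PSO minimiser in Python/NumPy that follows the update rules in Equations (1) and (2), with velocity clamping [22] and absorbing walls [24]. It is only a sketch of the technique: function and parameter names, the clamp expressed as a fraction of the search-space span, and the default settings are our assumptions, not the production C++/Qt implementation.

```python
import numpy as np

def pso_minimise(f, lower, upper, n_particles=50, w=0.7, c1=2.0, c2=2.0,
                 v_clamp=0.3, max_iters=1000, seed=None):
    """Minimise f over the box [lower, upper] with a basic PSO."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim, span = lower.size, upper - lower

    # Random initial positions in D and small random initial velocities.
    x = lower + rng.random((n_particles, dim)) * span
    v = (rng.random((n_particles, dim)) - 0.5) * 0.1 * span

    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()

    v_max = v_clamp * span                            # clamping limit per dimension
    for _ in range(max_iters):
        r1 = rng.random((n_particles, dim)) * c1
        r2 = rng.random((n_particles, dim)) * c2
        v = w * v + r1 * (pbest - x) + r2 * (g - x)   # Equation (1)
        v = np.clip(v, -v_max, v_max)                 # velocity clamping
        x = x + v                                      # Equation (2)

        # Absorbing walls: saturate infeasible components to the violated
        # boundary and zero the corresponding velocity components.
        below, above = x < lower, x > upper
        x = np.clip(x, lower, upper)
        v[below | above] = 0.0

        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val
```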
This simple working logic, together with good performance over benchmark and real-world problems, has made PSO popular and widely used by practitioners, who apply it across a variety of fields such as engineering [25,26] and health-care, as well as environmental, industrial and commercial applications and many other general optimisation tasks [27,28,29]. PSO variants for discrete domains have also been proposed to tackle, e.g., planning and scheduling problems [30].

3. Accurate Object Height from Focus Measurement

The conventional approach to determining the height of a generic object under inspection with the digital microscope is based on the fixed single thin lens optical system model. This features a simple relation between the distances $d_o$ and $d_i$ (from the object and the image to the lens, respectively), as formulated in Equation (3), where $f$ is the focal length of the lens [31].
$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} \quad (3)$$
This simple equation makes it possible to determine the distance from the camera to the object by simply moving the camera towards and away from the object and finding the best in-focus image [32]. Note that in cameras in practical use, things become more complex. This is because they require multiple lenses, because the properties of the lenses are generally confidential, and because the stepper motors used to move the internal focus lens do not have a linear relationship with the focus distance from the camera. Hence, the 0–25 mm admissible range of our system was initially discretised into 26 checkpoints, by moving the camera in 1 mm steps, to generate a lookup table matching corresponding object heights and peak focus positions of the camera. The calibration jig for performing this process is shown in Figure 1. It is used to move the camera in the vertical direction along a linear guide rail whose lead screw is turned by a stepper motor. Each movement of the camera, whose height is recorded through a 10 μm resolution linear encoder, is equivalent to changing the height of the test sample. All communication from the PC software to the stepper motor occurs through RS232 ports.
Using the lookup table to estimate the height of an object proved to be problematic for two main reasons. First, measuring focus is a noisy process, which often returns poor peak focus measurements. In turn, these lead to unreliable height values. Indeed, while moving the camera, a built-in function (set up in auto-focus mode) is called to find the peak focus energy with the Laplacian operator method [33] in order to determine the best focus position, read the focus motor position value and generate a table of 'focus value vs. object height' data; however, incorrectly returned position values negatively impact the calibration process, thus introducing errors. We empirically confirmed this by noticing that the peak focus energy we measured clearly differed from the peak of the data when a polynomial was fitted to the data. Additionally, significant outliers in the calibration data are likely to be present due to vibrations and changes in lighting conditions on the factory floor. Second, the system requires accurate re-calibration due to non-consistent optical/mechanical characteristics across cameras. This is achieved by returning the microscope to the factory, where the previously stored 26 checkpoints are used to perform the camera re-calibration process. As the 26 points cannot always represent the ideal baseline, this approach is still prone to poor re-calibration outcomes. Obviously, having to ship the system for re-calibration adds costs and delays, and it is undesirable considering that these systems are expected to be in constant use. Hence, a built-in automatic calibration routine capable of mitigating the noisy component of the focus measurement would significantly and simultaneously improve performance, customer experience and value for money.
To overcome these problems, we present a self-calibrating approach in which this uncertainty is mitigated. Instead of using the 26 noisy measured peaks, we augmented the dataset to make it more robust and accurate, and we found the optimal coefficients of a polynomial fitting the data, thus being able to generate the required lookup table from a smoother function and with any preferred number of points (this approach is known to reduce noise [34]).
In the following subsections, we first show the experimentation performed to increase the sample size from 26 to 260 samples (Section 3.1.1), and then show how we determined the order of the polynomial function that best fits the augmented dataset (Section 3.1.2). Once found, these coefficients are stored on each microscope before it is shipped to customers, and the firmware of the microscopes is upgraded to exploit the polynomial function to perform automatic re-calibration and determine the height of the object accurately without the need to return it to the factory.

3.1. Optimised Polynomial Fitting

We make use of the PSO algorithm, as introduced in Section 2.2, to optimise the polynomial interpolator so that the mean squared error (MSE) between the observed values $x_k$ and the predicted values $\hat{x}_k$, as defined in Equation (4), is minimised.
$$\mathrm{MSE} = \frac{1}{S_Z} \sum_{k=1}^{S_Z} \left(x_k - \hat{x}_k\right)^2 \quad (4)$$

3.1.1. Experimental Set-Up

We performed a preliminary experiment to determine the most appropriate number of points, i.e., the sample size $S_Z$, to use for evaluating the MSE objective function. For this, we used synthetic data obtained by generating a ground truth of height samples from a 2nd order polynomial with coefficients $a_0 = 3.4$, $a_1 = 0.022$ and $a_2 = 0.00003$ (±2 steps of uniform noise in the range of 0 to 2 is added to the ground truth data).
To obtain the best polynomial function match, we ran the PSO algorithm, set with $N_P = 50$, $c_1 = 1.50$, $c_2 = 0.30$, $w = 0.92$, velocity components clamped within $[-0.05, 0.05]$ and $I_{max} = 5000$ (parameters were empirically tuned on the smallest $S_Z$ returning the minimum error), with an increasing number of height samples $S_Z$, from 25 to 1000. The setup for changing $S_Z$ is shown in Figure 1. For each sample size, 10 independent runs of the PSO were performed. This process returns the polynomial functions required to calibrate the systems. As this is a measurement system, its accuracy is determined by its maximum error. Hence, for each run, we also record the corresponding maximum step error and compute the average over the runs. As evident from Table 1, this averaged maximum error (which is expected to be smaller when the sample size is larger) stops decreasing at $S_Z = 250$. Hence, we use and suggest this sample size (plus 10 extra points to allow for a margin of error) for fitting the polynomials with the PSO algorithm (i.e., we work with $S_Z = 260$).
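The sketch below illustrates how the MSE objective of Equation (4) can be set up for this fitting task on synthetic data resembling the description above. The variable names, the noise model (our reading of "uniform noise in the range of 0 to 2") and the example bounds in the final comment are illustrative assumptions; the resulting objective would be handed to a PSO such as the sketch in Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = np.array([3.4, 0.022, 0.00003])               # a0, a1, a2 from the text
S_Z = 260                                               # augmented sample size
heights = np.linspace(0.0, 25.0, S_Z)                  # 0-25 mm admissible range
ground_truth = np.polyval(a_true[::-1], heights)       # a0 + a1*h + a2*h^2
observed = ground_truth + rng.uniform(0.0, 2.0, S_Z)   # noisy synthetic measurements

def mse_objective(coeffs):
    """MSE of Equation (4) between observed samples and a candidate polynomial."""
    predicted = np.polyval(np.asarray(coeffs)[::-1], heights)
    return np.mean((observed - predicted) ** 2)

# e.g. best, err = pso_minimise(mse_objective, lower=[-400, 0, -3], upper=[0, 50, 3])
# (bounds here are illustrative; Section 3.1.2 lists the bounds used for real data)
```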

3.1.2. Validation on Real Data

Our method was validated experimentally. We employed a 3rd, a 4th and a 5th order polynomial function to understand whether higher order interpolators led to better results. Note that these require from 4 up to 6 coefficients $a_i \in [l_i, u_i]$, with $i = 0, 1, 2, 3, 4, 5$. The corresponding search spaces for the problems are indicated via the lower bound vector $l = [-400, 0, -3, -0.003, -0.00003, -0.000001]$ and the upper bound vector $u = [0, 50, 3, 0.003, 300{,}000, 1{,}000{,}000]$. Before applying the PSO approach, a camera was randomly selected and calibrated five times using accurate data measured in the factory to produce the 'ground truth' described in Section 3.1.1. Results showing the height of the camera (in terms of steps) are reported in Table 2. These show errors varying from 3.38 steps to 7.65 steps. Subsequently, we fitted the three polynomial functions with the PSO algorithm and used them to add calibration points as previously explained. The second approach led to better results, in particular when using the 4th order polynomial function, which reduced the error to the range of 0.24–0.53 steps.
It must be remarked that the proposed approach, with the suggested settings for the PSO and the use of a 4th order polynomial for the regression problem at hand, has proven very robust also in the presence of outliers. Indeed, we replicated the experimental phase after manually adding five outliers to the calibration data and were still able to achieve the 0.24 step error after the optimisation process.
As we make use of a heuristic method for fitting the polynomial function, we also compared the performance of the employed PSO to that of another state-of-the-art stochastic optimisation algorithm, namely CMA-ES [35]. The latter has a higher computational complexity; it was applied to five datasets acquired from a single camera run through the calibration process and compared to the PSO when fitting the 4th order polynomial function to the data; however, as shown in Table 3, only very marginal improvements were registered on the objective functions. Such small variations do not lead to better calibration processes, which is in favour of using PSO over CMA-ES, given its faster and simpler algorithmic structure.

4. Image Stitching

Image stitching is a method of blending two or more overlapping images into a higher resolution image. The requirement for image stitching can arise when an object is larger than the field of view of the microscope or when high-resolution images of an object are needed for quality assurance. Figure 2 shows an example of two images of the same object taken with overlapping regions. In order to blend these images together, the exact region of overlap needs to be found. In microscopy applications where the sample is moved manually, both translation and rotation are required. When using a manual XY table, only translation is required.
The established methods for solving this problem largely fall into two categories: feature point matching and template matching.

4.1. Feature Point Matching

The scale-invariant feature transform (SIFT) [36] and the speeded-up robust features (SURF) descriptor [37] are commonly used to detect common features in images, and RANSAC [38] is used to match these features and calculate the homography. The images are then blended together to form a higher resolution image.

4.2. Template Matching

There are many objects for which registration by the detection of feature points is unsuitable. Figure 3 shows an example of such an object. If an interest point algorithm were used to find a match for the areas marked by the yellow rectangle, it would not be robust due to the lack of sufficient features in these areas.
Although interest points may not be suitable for these objects, a method called template matching can be used. The most basic form of template matching for translation slides the smaller template image over the larger image, subtracts the pixel intensities of the template from those of the larger image at each position, and computes the sum of the absolute or squared differences. The position where this difference is lowest is the best match. If multiple matches are required, a threshold can be set and any position scoring below this threshold is deemed a match.
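A minimal sketch of this exhaustive sum-of-absolute-differences search is given below; the function name and the grayscale NumPy-array inputs are assumptions. In practice OpenCV's cv2.matchTemplate offers an optimised equivalent; the loop is shown only to make explicit the search that the PSO later accelerates.

```python
import numpy as np

def sad_match(image, template):
    """Slide template over image and return the (x, y) offset with the lowest SAD."""
    ih, iw = image.shape[:2]
    th, tw = template.shape[:2]
    best_pos, best_loss = (0, 0), np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw].astype(np.int32)
            loss = np.abs(window - template.astype(np.int32)).sum()
            if loss < best_loss:
                best_loss, best_pos = loss, (x, y)
    return best_pos, best_loss
```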
A printed circuit assembly (PCA) is typically rich in colours, with the printed circuit board (PCB) background, switches, connectors, resistors, capacitors and integrated circuits. Using the hue, saturation and value components exploits this variety of colour.
An experiment was carried out to determine the difference between template matching using the individual components of the HSV colour space. A template was selected, slid over the master image and the loss function plotted as colour maps. The loss functions were the absolute differences in hue, saturation, value and a combination of all three. As per Figure 4, hue matching results in a wider funnel at the best match position (global minimum) but has more local minima. The saturation and value (lightness or grayscale) channels result in a much narrower global minimum funnel but fewer local minima. A combination of all three HSV components results in a narrow global minimum and the fewest local minima.
Given a master image of width 640 and height 360 pixels and a template of width 105 and height 180 pixels, the number of absolute difference operations is 96,300, which takes 3.54 s on an Intel i5 quad-core processor running at 1.6 GHz with 8 GB of RAM. This could be sped up by using different scales, e.g., scaling the template and master image down and finding the global minimum, then repeating the process at progressively larger scales up to 1:1 while only searching in the area of the previous global minimum.

4.3. Accelerating Image Template Matching Using PSO

In order to speed up the search compared to the exhaustive search described previously, a PSO was used with $N_P = 50$, $c_1 = 0.9$, $c_2$ linearly adjusted from 0.1 to a maximum of 5, $w = 0.9$, velocity components clamped within $[-0.3, 0.3]$ and $I_{max} = 5000$. These values were found empirically.
To achieve our goal with the PSO method, a loss function to be minimised is defined as the absolute difference between the template image and the image region predicted by each particle. The loss can use the hue, saturation, value (lightness) or all three components; the lightness value was used in this investigation. Particles encode an (x, y) point whose bounds are determined by the image and template sizes.
A method was implemented to allow for more exploration at the start of the search and more exploitation, or convergence, at the end, by varying the social coefficient $c_2$. When this option is selected, the search starts with $c_2 = 0.1$ until half of the search cycles have elapsed; $c_2$ is then linearly increased from 0.1 to 5.0 over the second half of the search. Results for the variable and fixed $c_2$ methods are reported in Table 4.
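The schedule just described can be expressed compactly as below; the function name and signature are illustrative, not the production code.

```python
def c2_schedule(iteration, max_iters, c2_min=0.1, c2_max=5.0):
    """Hold c2 at c2_min for the first half of the search, then ramp linearly to c2_max."""
    half = max_iters // 2
    if iteration < half:
        return c2_min
    return c2_min + (c2_max - c2_min) * (iteration - half) / max(max_iters - half, 1)
```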
We used the PSO-optimised image stitching approach for the test cases in the following paragraphs.

4.3.1. Experiment 1

The object in Figure 3 was used to evaluate the PSO search algorithm for the application of image stitching. The loss over the search space is shown in Figure 5. Lightness and HSV have distinct minima and a funnel-like structure around the minimum, making them ideal for a search algorithm. As lightness requires one third of the computation, it was chosen as the matching method.
The PSO was run 10 times. The average speed of the PSO search, 365 ms, is significantly faster than the previous method, which requires 2.85 s to achieve very similar performance. This means that the PSO approach does not deteriorate the matching quality while delivering an 87% decrease in the search time—see Table 4.

4.3.2. Experiment 2

The PCB selected in Figure 6 was used to evaluate the PSO search algorithm for the application of image stitching. The loss over the search space is shown in Figure 7. All search spaces have a very small area indicating the matching position.

4.3.3. Results

Full-search template matching using hue only took 2.25 s. In order to obtain accurate matching using the PSO search, a large number of particles (500) was needed to ensure the area with the lowest loss was explored. Similarly to Experiment 1, minor errors occurred where $c_2$ was fixed but none where $c_2$ was varied during the search. The number of cycles was set to 15 and the search time was 1.25 s, resulting in a 45% reduction in run time.

4.4. 2D Point Cloud Matching

4.4.1. Method

There are many objects in the medical device industry for which high-resolution images are required for quality assurance. Some of these objects have few or no features that can be used as interest points. At the same time, they have areas that are identical, which rules out the template matching method. Thus, neither of the described methods is applicable. For such objects, interest points can be added by use of a random point laser.
For this, a 2D point cloud matching application (Figure 8) was developed using synthetic point data.
In total, 100 points were generated with the X and Y coordinates uniformly randomly drawn between $l_i = 0$ and $u_i = 300$ (Point Cloud 1). A second set of points was generated by mutating the first set with a programmable random dropout, where a point is replaced with a new random point, and a programmable point mutation, where random noise is added to the point position (Point Cloud 2). In addition to the mutations, each point can be translated by ±20 positions in X and Y and rotated by ±20 degrees. An additional 50 points are added to Point Cloud 2 outside the 0 to 300 area.
A PSO algorithm with the settings $N_P = 50$, $c_1 = 2$, $c_2 = 0.3$, $w = 0.9$, velocity components clamped within 30% of the search space (with $u_i - l_i = 300$) and $I_{max} = 1000$ was developed. Each particle encodes a predicted $X, Y$ offset and a rotation angle. The loss function computes the predicted dataset by offsetting each point in Point Cloud 1 by $X, Y$ and rotating it by the rotation angle. For each point in the predicted dataset, the distance to its nearest point in Point Cloud 2 is calculated. The sum of these squared distances is used as the loss function to be minimised.
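A minimal sketch of this loss is given below, assuming both clouds are N×2 NumPy arrays and that each particle encodes (dx, dy, angle in degrees); names and the degree convention are illustrative.

```python
import numpy as np

def point_cloud_loss(params, cloud1, cloud2):
    """Sum of squared nearest-neighbour distances after applying the candidate transform."""
    dx, dy, angle_deg = params
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    predicted = cloud1 @ rot.T + np.array([dx, dy])      # transform Point Cloud 1
    # Squared distance from each predicted point to its nearest point in cloud2.
    d2 = ((predicted[:, None, :] - cloud2[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()
```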

4.4.2. Results

The PSO matching time for a 2D point cloud matching is approximately 5 s and the matching function was 100% accurate on every run with maximum dropout (25%) and noise (10%).

5. Colour Reproduction Accuracy

Colour reproduction accuracy is very important in digital microscopes, as the colour sensitivity of the sensor seldom reproduces the colour space required by monitors [39]. In 1996, HP and Microsoft developed a standard colour space (sRGB) for use in monitors, printers and the internet [40]. In order to render the image on a display as accurately as possible, the RGB pixel values need to be mapped to the device-independent sRGB colour space.
Figure 9 and Figure 10 demonstrate the essence of the challenge. A reference colour chart (Figure 9) is placed under the microscope and an image (Figure 10) is taken with no colour correction applied. The measured colours are not an accurate representation of the reference colour chart and look washed out and lacking in vibrancy.
In the digital microscope, colour correction is part of the simplified image processing pipeline shown in Figure 11.
The image processing pipeline works as follows:
  • Light from the object under inspection is focused on the image sensor using a single lens or a combination of lenses. Infrared light is blocked using an infrared filter.
  • The image sensor converts the incoming light to a Bayer image.
  • The Bayer image is converted into a red, green and blue (RGB) image.
  • The RGB image is white balanced.
  • The RGB pixel measurements, which depend on the sensor, optics and lighting characteristics, do not accurately represent the actual object colours and are corrected with a colour correction matrix.
  • Gamma correction is applied to match the gamma of the display.
There are many methods of mapping from device-dependent RGB to the sRGB space, such as lookup tables, linear and polynomial regression and neural networks [41]; however, a simple 3 × 3 matrix linearly transforming the device-dependent RGB values to the sRGB space "is not easily challenged" [41]. In addition, this mapping is applied in real time and performed in an FPGA, ruling out alternative methods due to speed and resource usage.
In order to find a 3 × 3 colour correction matrix, a number of known sRGB colours are measured by the sensor and a matrix is derived that maps these measured values to the known sRGB colours. A reference colour checker chart [42] was imaged by the camera. A region of interest (Figure 12) within each colour patch was selected and the mean RGB values of that region of interest were calculated.
A table of colour regions vs. reference sRGB values [43] provides the ground truth for each colour in the chart.
The reference sRGB values are non-linear, with an exponent (gamma) of approximately 2.2 applied to their normalised values [40]. The measured RGB values also have a gamma of 2.2 applied to their normalised values within the FPGA. To convert these into a linear space $RGB_L$, where a linear 3 × 3 matrix can be used to transform the device-dependent RGB values to device-independent sRGB values, each RGB value is divided by 255 and a gamma power of 2.2 is applied (Equation (5)).
$$RGB_L = \left(\frac{RGB}{255}\right)^{2.2} \quad (5)$$
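As a worked example of Equation (5), the helper below linearises 8-bit values with the approximate gamma of 2.2 used in the text (a simplification of the exact piecewise sRGB transfer function); the function name is ours.

```python
import numpy as np

def to_linear(rgb_8bit, gamma=2.2):
    """Normalise 8-bit RGB to [0, 1] and apply the gamma power of Equation (5)."""
    return (np.asarray(rgb_8bit, float) / 255.0) ** gamma

# Example: to_linear([36, 25, 20]) linearises one measured colour patch.
```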
Table 5 shows the measured values, the reference sRGB values and the 3 × 3 colour correction matrix.
A 3 × 3 matrix needs to be found that transforms the measured colours to the reference colours as accurately as possible.
The most common method of correcting colour is the linear method, using a 3 × 3 matrix to transform the measured colours to the reference colours [44]. Let $R$ and $M$ denote $N \times 3$ matrices of reference and measured colours (one row per colour sample), where $N$ is the number of RGB samples. A 3 × 3 matrix $C$ needs to be found that minimises
$$\min_C \; \lVert MC - R \rVert \quad (6)$$
An inverse matrix can be used to solve $MC = R$. Pre-multiplying each side of the equation by the inverse of $M$ gives
$$M^{-1} M C = M^{-1} R \quad (7)$$
As any matrix multiplied by its inverse is the identity matrix $I$,
$$I C = M^{-1} R \quad (8)$$
and, since the matrix $C$ is not changed when multiplied by the identity matrix,
$$C = M^{-1} R \quad (9)$$
However, as the matrices $M$ and $R$ are not square, a Moore–Penrose pseudo-inverse [44,45] is commonly used to find the matrix $C$. Using the OpenCV [9] function "inv(DECOMP_SVD)" to find such a matrix (called a colour correction matrix (CCM) hereafter) resulted in the values shown in Table 6.
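For illustration, the same pseudo-inverse estimate can be computed with NumPy as sketched below. The article's implementation uses OpenCV's inv(DECOMP_SVD) in C++; np.linalg.pinv is used here only as an equivalent Moore–Penrose pseudo-inverse, and the N×3 row-per-sample layout is an assumption.

```python
import numpy as np

def fit_ccm_pinv(M, R):
    """Least-squares 3x3 CCM, Equation (9): C such that M @ C approximates R."""
    return np.linalg.pinv(M) @ R   # M, R are N x 3 linearised colour matrices
```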
A standard PSO search method was used, with the addition of 'outlier rejection'. Outlier rejection was implemented for each particle by sorting the per-colour losses and eliminating the largest ones. The number of outliers can be selected from 0 to 5. The PSO settings used are shown in Table 7.
Loss function: the normalised and gamma-corrected measured RGB values $M$ were multiplied by the particle's colour correction matrix $C$. The resulting corrected RGB values were subtracted from the normalised and gamma-corrected reference values $R$. The sum of the squares of the absolute differences was used as the loss (Equation (10)).
$$Loss = \lVert MC - R \rVert^2 \quad (10)$$
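A sketch of this per-particle loss with the optional outlier rejection described above is given below; the parameter packing (nine matrix entries per particle) and names are assumptions.

```python
import numpy as np

def ccm_loss(ccm_flat, M, R, n_outliers=0):
    """Equation (10) with optional rejection of the largest per-colour errors."""
    C = np.asarray(ccm_flat, float).reshape(3, 3)    # particle encodes 9 values
    per_colour = ((M @ C - R) ** 2).sum(axis=1)      # squared error per colour patch
    per_colour.sort()                                # ascending
    if n_outliers:
        per_colour = per_colour[:-n_outliers]        # drop the worst patches
    return per_colour.sum()
```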
The PSO algorithm found a colour correction matrix (Table 8) virtually identical to that of the pseudo-inverse matrix method when no outliers were allowed.
The total colour reproduction error was calculated as the sum of the absolute errors between each of the corrected sRGB values and the reference sRGB values. The pseudo-inverse matrix produced a consistent error of 310. The PSO algorithm produced errors from 308 to 310. Allowing a single outlier in the PSO algorithm reduced the error to 274, an 11.3% decrease in the colour reproduction error. Figure 13 shows colour examples (using the pseudo-inverse matrix) in groups of three: the top of each group is the uncorrected colour, the middle is the required colour and the bottom is the corrected colour. Three examples where the colour matching noticeably improved with the PSO algorithm are highlighted by a yellow border.
The PSO algorithm found a colour correction that gave an equal or better result than the pseudo-inverse matrix in 100 out of 100 runs. The number of cycles needed to perform a search varied between 320 and 680.

6. Image Distortion Correction

A simplified image distortion correction pipeline for the digital microscope used is shown in Figure 14.
Digital microscopes are commonly used to measure the objects being inspected. Typically, the system is calibrated using a reference ruler, which is placed under the microscope, and a reference measurement is made. From that reference measurement, a pixel-to-mm ratio is determined and this is used to make further measurements.
This method relies on accurate spatial reproduction of the image; however, the optics can introduce lens distortion errors due to the non-linear magnification of lenses and the alignment of the sensor with the optics. In addition, the optics together with the object being inspected can introduce linear affine and non-linear perspective errors. Figure 15 shows an example of a rectangular grid image with perspective and lens distortions applied. As the two blue lines are exactly the same length, it is evident that significant measurement errors occur if a fixed pixel-to-mm ratio is used to make measurements.
In theory, a pinhole camera produces a geometrically perfect image of an object; however, in real optical systems, lens manufacturers have to balance many aberrations, resulting in radial image distortion [46]. Radial distortions appear as barrel (Figure 16) or pincushion (Figure 17) distortions.
Radial distortion can be modelled using a polynomial equation. For each pixel (x, y) in the distorted image, the corrected position of that pixel is given in Equations (17) and (18), where $R$ is the radius, $r$ is the normalised distance of a pixel from the optical centre for a 1920 × 1080 image, and $k_1$ to $k_n$ are the polynomial coefficients.
The optical centre of the camera may not be the centre of the image due to sensor and lens misalignment. In Figure 18, the red grid is an undistorted grid, while the blue grid has barrel distortion and decentring in the x-plane; because of the decentring, P1 and P2 are not equidistant from the image centre. If the image centre were used to calculate $r$ in Equations (11)–(14), significant errors would be introduced; therefore, the optical centre position needs to be found.
$$d_x = x_{point} - x_{optical\ centre} \quad (11)$$
$$d_y = y_{point} - y_{optical\ centre} \quad (12)$$
$$R = \sqrt{d_x^2 + d_y^2} \quad (13)$$
$$r = R / 1100.1 \quad (14)$$
$$x_{scale} = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \ldots \quad (15)$$
$$y_{scale} = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \ldots \quad (16)$$
$$x_{corrected} = x_{optical\ centre} + (x - x_{optical\ centre}) \cdot x_{scale} \quad (17)$$
$$y_{corrected} = y_{optical\ centre} + (y - y_{optical\ centre}) \cdot y_{scale} \quad (18)$$
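The sketch below applies Equations (11)–(18) to a single pixel; the function name and argument layout are illustrative, and the 1100.1 normalisation constant is the value given in Equation (14) for a 1920 × 1080 image.

```python
import numpy as np

def undistort_point(x, y, cx, cy, k, r_norm=1100.1):
    """Radially correct pixel (x, y) about optical centre (cx, cy) with coefficients k."""
    dx, dy = x - cx, y - cy                                   # Eqs. (11), (12)
    r = np.hypot(dx, dy) / r_norm                             # Eqs. (13), (14)
    scale = 1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k))  # Eqs. (15), (16)
    return cx + dx * scale, cy + dy * scale                   # Eqs. (17), (18)
```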

6.1. Affine and Perspective Distortion

Affine (linear) and perspective (non-linear) distortions are shown in Figure 19, together with the 3 × 3 matrices that can be used to carry out those transformations. The new location (x′, y′) of each pixel is obtained by multiplying the matrix by the current location of the pixel. Equation (19) depicts a pixel rotation operation; a 1 is appended to the (x, y) and (x′, y′) pixel locations to allow the matrix multiplication.
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (19)$$

Homography Matrix

All of these matrix operations can be combined into a single 3 × 3 matrix known as a homography matrix (Equation (20)). This single matrix, with 8 degrees of freedom, can correct all of the following distortions:
  • Rotation.
  • Shearing.
  • Translation.
  • Scaling.
  • Perspective.
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (20)$$
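As a small worked example of Equation (20), the helper below applies a homography to one pixel in homogeneous coordinates and divides by the third component to return to image coordinates; the function name is ours.

```python
import numpy as np

def apply_homography(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H, Equation (20)."""
    p = np.asarray(H, float) @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```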

6.2. Problem Definition

Figure 20 demonstrates the essence of the challenge. A reference grid chart is placed under the microscope and an image is taken with no image distortion correction applied. A method of correcting the distorted image in Figure 20 (left) to produce the corrected image in Figure 20 (right) is required.

6.3. The Method

A known reference grid is synthetically generated or imaged by the microscope, and the distorted image points are detected and their positions saved.
A combination of a radial lens distortion correction with optical centre compensation and a homography matrix that maps the detected points to the known point positions is searched for. A flowchart of the search is depicted in Figure 21.
A standard PSO search method was used with the settings reported in Table 9.
Loss function: the sum of the squared differences between the corrected detected points and the reference points is used as the loss (Equation (21)):
$$Loss = \sum_{i=1}^{n} \left\lVert (x_i, y_i)_{predicted} - (x_i, y_i)_{real} \right\rVert^2 \quad (21)$$
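The sketch below shows one way such a loss could be assembled for a candidate solution that bundles four radial coefficients, the optical centre and eight free homography entries (with $h_{33}$ fixed to 1); the parameter packing, names and N×2 point arrays are assumptions made for illustration.

```python
import numpy as np

def distortion_loss(params, detected, reference, r_norm=1100.1):
    """Equation (21): radially correct detected points, map them by H, compare to reference."""
    k = params[0:4]                                    # k1..k4
    cx, cy = params[4:6]                               # optical centre
    H = np.append(params[6:14], 1.0).reshape(3, 3)     # 8 free entries, h33 = 1

    dx, dy = detected[:, 0] - cx, detected[:, 1] - cy
    r = np.hypot(dx, dy) / r_norm
    scale = 1.0 + k[0]*r**2 + k[1]*r**4 + k[2]*r**6 + k[3]*r**8
    undist = np.column_stack([cx + dx * scale, cy + dy * scale,
                              np.ones(len(detected))])

    mapped = undist @ H.T                              # homography in homogeneous coords
    mapped = mapped[:, :2] / mapped[:, 2:3]
    return ((mapped - reference) ** 2).sum()
```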

6.3.1. Particle Initial Values

The range of initialisation values for the homography matrix ( h r o w , c o l ), polynomial constants ( k 1 to k 4 ) and the optical centre ( O x , O y ) are given in Table 10.

6.3.2. Velocity Limits

The velocity limits are clamped at 20% of the ranges given in Table 10.

6.3.3. Bounds Checking

No bounds checking of the particle positions is performed, as any solution that provides a good transform between distorted points and corrected points is acceptable.

6.4. Results

6.4.1. Synthetic Data Results

A synthetic chart with 19 × 19 dots, shown in Figure 22, is generated using a random perspective, a user-selectable lens distortion of 0.3 and an optical centre of (4, −5).
The PSO algorithm and CMA-ES [35] algorithm were used to find solutions to map the distorted points to undistorted points.
Both the CMA-ES and PSO algorithms give similar results, with a maximum pixel error from 0.22 to 0.28 and an average pixel error between 0.08 and 0.09.

6.4.2. Real Data Results

A calibration chart with 19 × 19 dots, shown in Figure 23, was manually placed under the camera and positioned in the approximate centre of the field of view. The dots were detected with a blob detector and their centres recorded, as shown in Figure 24.
The PSO algorithm and CMA-ES [35] algorithm were used to find solutions to map the distorted points to undistorted points. A library was used to implement the CMA-ES function [47]. The results are shown in Table 11 and Table 12, respectively.
In total, 12 images were taken with a reference ruler in different positions and points 10 mm apart were selected manually. Figure 25 shows these images and the reference 10 mm measurements.
Referring to Table 13, the points at the start and end of each 10 mm measurement are shown, with $(P_{1x}, P_{1y})$ being one point and $(P_{2x}, P_{2y})$ the second point. The real number of pixels between the points is shown in the $R_{pix}$ column.
Each point is transformed using the solutions found, and the predicted number of pixels between the transformed points is given in the $P_{pix}$ column. The average of the predicted number of pixels is divided by 12 to give a pixel-to-mm ratio. This ratio is multiplied by the predicted number of pixels to give a predicted length, shown in column $P_{len}$. The real length, column $R_{len}$, is calculated in the same way.
The resolution in the horizontal direction is 10 mm ÷ 1920 = 5.2 μm and in the vertical direction is 10 mm ÷ 1080 = 9.26 μm. When no image distortion correction is used, a maximum measurement error of 247 μm (Table 13) is found. Using the best CMA-ES solution, the maximum error is 8 μm.
The PSO search was run 10 times and the maximum error logged for each run. Results are shown in Table 14. This gives a standard deviation σ of 2.24 μm, a mean of 8.4 μm and a variance of 5.04.

7. Conclusions

We have presented several software components of a digital inspection microscope used for testing medical devices. The underlying theme is the use of PSO as the optimisation algorithm across many different tasks. We have shown that such a heuristic algorithm provides good results for all of them.
The first task uses the PSO algorithm to improve upon the original calibration system for the height-from-focus measurement. Originally, this was performed with a poor measurement method, resulting in noisy data, and required shipping the microscope back to the factory. In the new approach, the PSO is used to optimally fit a polynomial to these noisy data automatically, thus creating a smoother calibration model. This method has been shown to significantly increase the accuracy of the system. It is important to take sufficient samples for this method to work. When trying to replace PSO with more complex algorithms, i.e., CMA-ES, we noted only a marginal difference in the results, which is not significant enough to justify the computationally more expensive algorithm. The method also shows the viability of calibrating a system for this and other measurements prior to having optimised measurement methods, releasing those systems to customers, and correcting the noisy calibration data once the measurements are optimised.
The second important task is the matching of images for image stitching. Here, we improve a conventional template matching algorithm using PSO. Results show that the newly proposed approach is most accurate when the object has a low variance in features, as this results in an ideal search space. When there are many features, the low-loss region of the search space is very small; if no particle explores this area, then the matching fails; however, when there are many features, standard interest point matching can be used instead. The PSO matching time is approximately 365 ms and the matching function was 100% accurate on every run of experiments 1 and 2 in Section 4.3 with maximum dropout (25%) and noise (10%). With PSO-supported 2D point cloud matching, we introduce a method to deal with cases where neither feature point matching nor template matching can be applied. This works by adding laser points to the scene and shows a lot of promise. Using PSO provides a robust solution, and creating interest points and matching them via distances to other points offers the potential for real-time matching.
Thirdly, the problem of colour correction is tackled using the PSO algorithm. With no outlier rejection, this produces a colour correction matrix as good as, or slightly better than, the standard pseudo-inverse matrix method. Adding outlier rejection decreased the error by 11.3%, visibly improving the colour reproduction. Hence, the PSO algorithm is reliable in this application and beats the conventional algorithm.
Fourthly, image distortions are corrected through PSO and CMA-ES. Here, the CMA-ES search seems to be more consistent across multiple runs than the PSO search, which also gave satisfactory results. Indeed, the PSO search for an image distortion correction solution has been shown to improve the maximum measurement error from 247 μm to an average of 8.4 μm. As this calibration is performed once per system, a number of runs could be made and the homography matrix and polynomial constants with the lowest error picked, reaching the quality of CMA-ES.
Overall, there are many potential uses of PSO in this application; we have demonstrated four of them. In all cases, the PSO-based algorithm can either match the quality of the current best approach or beat it. In some cases, it also offers a speed advantage. In this way, we have demonstrated the applicability of PSO in an industrial setting.

Author Contributions

Conceptualization, L.R., F.C., S.K. and S.C.-D.; methodology, L.R.; software, L.R.; validation, L.R.; formal analysis, L.R.; investigation, L.R.; resources, L.R.; data curation, L.R.; writing—original draft preparation, L.R., F.C. and S.K.; writing—review and editing, L.R., F.C., S.K. and S.C.-D.; visualization, L.R.; supervision, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Ash Technologies Ltd. for providing the inspection system and relevant related information.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computation; Springer: Berlin, Germany, 2003. [Google Scholar]
  2. Caraffini, F.; Santucci, V.; Milani, A. (Eds.) Evolutionary Computation & Swarm Intelligence; MDPI: Basel, Switzerland, 2020. [Google Scholar]
  3. Miettinen, K. (Ed.) Evolutionary Algorithms in Engineering and Computer Science: Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, GE; John Wiley & Sons, Inc.: New York, NY, USA, 1999. [Google Scholar]
  4. Oduguwa, V.; Tiwari, A.; Roy, R. Evolutionary computing in manufacturing industry: An overview of recent applications. Appl. Soft Comput. 2005, 5, 281–299. [Google Scholar] [CrossRef]
  5. Kumar, K.; Zindani, D.; Davim, J.P. Optimizing Engineering Problems through Heuristic Techniques; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  6. Caraffini, F.; Chiclana, F.; Moodley, R.; Gongora, M.; Gupta, D.; Kose, U.; Castillo, O.; Al-Turjman, F. Applications of computational intelligence-based systems for societal enhancement. Int. J. Intell. Syst. 2022, 37, 2679–2682. [Google Scholar] [CrossRef]
  7. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. Trans. Evol. Comp 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  8. Ash Technologies. Omni-3, Digital Microscope and Measurement System. Available online: https://www.ashvision.com/omni-3/ (accessed on 1 July 2022).
  9. Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools Prof. Program. 2000, 25, 120–125. [Google Scholar]
  10. The Qt Company. Qt, Cross-Platform Software Development for Embedded & Desktop. Available online: https://www.qt.io/ (accessed on 6 August 2020).
  11. Catalucci, S.; Thompson, A.; Piano, S.; Branson, D.T.; Leach, R. Optical metrology for digital manufacturing: A review. Int. J. Adv. Manuf. Technol. 2022, 120, 4271–4290. [Google Scholar] [CrossRef]
  12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  13. Okwu, M.O.; Tartibu, L.K. Particle Swarm Optimisation. In Metaheuristic Optimization: Nature-Inspired Algorithms Swarm and Computational Intelligence, Theory and Applications; Springer International Publishing: Cham, Switzerland, 2021; pp. 5–13. [Google Scholar] [CrossRef]
  14. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar] [CrossRef]
  15. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  16. Zheng, Y.l.; Ma, L.h.; Zhang, L.y.; Qian, J.x. Empirical study of particle swarm optimizer with an increasing inertia weight. In Proceedings of the 2003 Congress on Evolutionary Computation, 2003. CEC ’03, Canberra, Australia, 8–12 December 2003; Volume 1, pp. 221–226. [Google Scholar]
  17. Bansal, J.C.; Singh, P.K.; Saraswat, M.; Verma, A.; Jadon, S.S.; Abraham, A. Inertia Weight strategies in Particle Swarm Optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 633–640. [Google Scholar] [CrossRef] [Green Version]
  18. Ratnaweera, A.; Halgamuge, S.; Watson, H. Particle swarm optimiser with time varying acceleration coefficients. In Proceedings of the International Conference on Soft Computing and Intelligent Systems, Tsukuba, Japan, 21–25 October 2002; pp. 240–255. [Google Scholar]
  19. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-Organizing Hierarchical Particle Swarm Optimizer With Time-Varying Acceleration Coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  20. Tripathi, P.K.; Bandyopadhyay, S.; Pal, S.K. Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients. Inf. Sci. 2007, 177, 5033–5049. [Google Scholar] [CrossRef] [Green Version]
  21. Engelbrecht, A. Particle swarm optimization: Velocity initialization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  22. Shahzad, F.; Masood, S.; Khan, N. Probabilistic opposition-based particle swarm optimization with velocity clamping. Knowl. Inf. Syst. 2014, 39, 703–737. [Google Scholar] [CrossRef]
  23. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef] [Green Version]
  24. Xu, S.; Rahmat-Samii, Y. Boundary Conditions in Particle Swarm Optimization Revisited. IEEE Trans. Antennas Propag. 2007, 55, 760–765. [Google Scholar] [CrossRef]
  25. Kulkarni, M.N.K.; Patekar, M.S.; Bhoskar, M.T.; Kulkarni, M.O.; Kakandikar, G.; Nandedkar, V. Particle Swarm Optimization Applications to Mechanical Engineering—A Review. Mater. Today Proc. 2015, 2, 2631–2639. [Google Scholar] [CrossRef]
  26. Li, G.; Li, Y.; Chen, H.; Deng, W. Fractional-Order Controller for Course-Keeping of Underactuated Surface Vessels Based on Frequency Domain Specification and Improved Particle Swarm Optimization Algorithm. Appl. Sci. 2022, 12, 3139. [Google Scholar] [CrossRef]
  27. Eberhart; Shi, Y. Particle swarm optimization: Developments, applications and resources. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 1, pp. 81–86. [Google Scholar] [CrossRef]
  28. Gad, A.G. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  29. Ma, K.; Ren, C.; Zhang, Y.; Chen, Y.; Chen, Y.; Zhou, P. A New Vibration-Absorbing Wheel Structure with Time-Delay Feedback Control for Reducing Vehicle Vibration. Appl. Sci. 2022, 12, 3157. [Google Scholar] [CrossRef]
  30. Guo, Y.; Li, W.; Mileham, A.; Owen, G. Applications of particle swarm optimisation in integrated process planning and scheduling. Robot. Comput.-Integr. Manuf. 2009, 25, 280–288. [Google Scholar] [CrossRef]
  31. Freeman, M.; Freeman, M.; Hull, C.; Charman, W. Optics; Butterworth-Heinemann: Oxford, UK, 2003; p. 92. [Google Scholar]
  32. Murawski, K. Method of Measuring the Distance to an Object Based on One Shot Obtained from a Motionless Camera with a Fixed-Focus Lens. Acta Phys. Pol. A 2015, 127, 1591–1596. [Google Scholar] [CrossRef]
  33. Zhang, X.; Liu, Z.; Jiang, M.; Chang, M. Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus. Opt. Express 2014, 22, 31237. [Google Scholar] [CrossRef]
  34. Laakso, T.I.; Tarczynski, A.; Paul Murphy, N.; Välimäki, V. Polynomial filtering approach to reconstruction and noise reduction of nonuniformly sampled signals. Signal Process. 2000, 80, 567–575. [Google Scholar] [CrossRef]
  35. Hansen, N.; Ostermeier, A. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 312–317. [Google Scholar]
  36. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar] [CrossRef]
  37. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the Computer Vision—ECCV 2006, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
  38. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  39. Bianco, S.; Bruna, A.R.; Naccari, F.; Schettini, R. Color correction pipeline optimization for digital cameras. J. Electron. Imaging 2013, 22, 023014. [Google Scholar] [CrossRef]
  40. Anderson, M.; Motta, R.; Chandrasekar, S.; Stokes, M. Proposal for a standard default color space for the internet-srgb. In Proceedings of the 4th Color Imaging Conference. Society for Imaging Science and Technology, Scottsdale, AZ, USA, 19–22 November 1996; pp. 238–245. [Google Scholar]
  41. Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color correction using root-polynomial regression. IEEE Trans. Image Process. 2015, 24, 1460–1470. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. X Rite. ColorChecker Classic: X-Rite Photo & Video. Available online: https://xritephoto.com/colorchecker-classic (accessed on 1 October 2020).
  43. X Rite. Color Management Products, Tools, Solutions: X-Rite Photo & Video. Available online: https://xritephoto.com/ph_product_overview.aspx?ID=820 (accessed on 1 April 2019).
  44. Fang, F.; Gong, H.; Mackiewicz, M.; Finlayson, G. Colour Correction Toolbox. Available online: https://ueaeprints.uea.ac.uk/id/eprint/65098/4/Colour_Correction_Toolbox.pdf (accessed on 15 June 2022).
  45. Dresden, A. The fourteenth western meeting of the American Mathematical Society. Bull. Amer. Math. Soc. 1920, 26, 385–396. [Google Scholar] [CrossRef]
  46. Mallon, J.; Whelan, P. Precise radial un-distortion of images. In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004, Cambridge, UK, 26 August 2004; Volume 1, pp. 18–21. [Google Scholar] [CrossRef] [Green Version]
  47. Fabisch, A. CMA-ESpp. Available online: https://github.com/AlexanderFabisch/CMA-ESpp (accessed on 6 August 2020).
Figure 1. Camera calibration jig.
Figure 2. Example for the image stitching problem, where the two yellow areas represent the overlap to be found.
Figure 3. Example object not suitable for interest point matching.
Figure 4. Loss in HSV for image template matching (right) over an example image (top) to a section of the image (yellow box). Blue indicates no loss and dark red indicates the maximum loss.
Figure 5. Loss in HSV over the image used for experiment 1. Blue indicates no loss and dark red the maximum loss: (a) hue; (b) saturation; (c) lightness; (d) HSV.
Figure 6. The yellow area is matched to the overall image in experiment 2.
Figure 7. Loss in HSV over the image used for experiment 2. Blue indicates no loss and dark red the maximum loss: (a) hue; (b) saturation; (c) lightness; (d) HSV.
Figure 8. The 2D point cloud matching application.
Figure 9. Macbeth chart.
Figure 10. Measured colours.
Figure 11. Microscope image processing pipeline.
Figure 12. X-rite colour checker chart measured by the camera showing the region of interest areas.
Figure 13. Examples of colour comparisons using pseudo-inverse matrix.
Figure 14. Microscope image distortion processing pipeline.
Figure 15. Image distortion errors.
Figure 16. Barrel distortion.
Figure 17. Pincushion distortion.
Figure 18. Optical decentring.
Figure 19. Affine and perspective distortions.
Figure 20. Distortion correction.
Figure 21. Distortion correction flowchart.
Figure 22. Generated synthetic image example.
Figure 23. Cal chart.
Figure 24. Detected points.
Figure 25. Twelve sample images, including a reference ruler, used for evaluating the image distortion algorithms.
Table 1. Finding S_Z for the best polynomial match. Selected values are shown in boldface.

S_Z                        25     50     75     100    150    200    250    500    1000
Max step error average     1.33   0.48   0.46   0.43   0.39   0.33   0.22   0.22   0.22
Table 2. Validation results on five calibration processes, reported in steps.

                          Cal. 1   Cal. 2   Cal. 3   Cal. 4   Cal. 5
Ground truth              3.38     7.07     7.65     5.97     5.98
3rd-order polynomial      0.34     0.45     0.43     0.51     0.70
4th-order polynomial      0.24     0.31     0.39     0.32     0.53
5th-order polynomial      0.39     0.31     0.39     0.29     0.95
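For readers who want to reproduce this kind of comparison, the sketch below fits polynomials of order 3 to 5 to a set of calibration samples and reports the largest absolute residual, in the spirit of Tables 1 and 2. The sample data, variable names and the use of numpy.polyfit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def max_fit_error(x, y, order):
    """Fit a polynomial of the given order to (x, y) calibration samples and
    return the largest absolute residual, in the same units as y."""
    coeffs = np.polyfit(x, y, order)        # least-squares polynomial fit
    residuals = np.polyval(coeffs, x) - y   # error at every calibration sample
    return np.max(np.abs(residuals))

# Hypothetical calibration samples (placeholders, not the paper's data).
x = np.linspace(0.0, 1.0, 25)
y = 3.0 * x**3 - 2.0 * x**2 + x + np.random.normal(scale=0.01, size=x.size)

for order in (3, 4, 5):
    print(order, max_fit_error(x, y, order))
```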
Table 3. Logged MSE values for PSO and CMA-ES.

PSO      271.3    401.28   355.3    374.7    319.2
CMA-ES   270.8    398.7    354.3    373.4    318.7
Table 4. PSO results for experiment 1 obtained with both fixed and variable values of the c2 parameter. The ground truth is (258, 112).

Run   Fixed c2      Variable c2    Time (ms)
1     (257, 112)    (258, 112)     350
2     (258, 112)    (258, 112)     374
3     (258, 112)    (258, 112)     356
4     (258, 112)    (258, 112)     355
5     (258, 112)    (258, 112)     364
6     (258, 112)    (258, 112)     365
7     (258, 112)    (258, 112)     365
8     (258, 113)    (258, 112)     365
9     (258, 112)    (258, 112)     365
10    (258, 112)    (258, 112)     365
Table 5. Measured and reference sRGB values. The measured values, multiplied by the (initially unknown) 3 × 3 CCM, should reproduce the reference values (measured × CCM = reference).

Measured (R, G, B)          Reference (R, G, B)
0.141   0.098   0.078       0.173   0.082   0.055
0.482   0.349   0.294       0.548   0.311   0.227
0.188   0.231   0.326       0.122   0.198   0.344
0.122   0.137   0.090       0.094   0.151   0.053
0.247   0.255   0.384       0.239   0.220   0.448
0.322   0.494   0.478       0.136   0.517   0.410
0.502   0.243   0.098       0.680   0.212   0.021
0.118   0.153   0.329       0.078   0.104   0.389
0.376   0.157   0.141       0.542   0.101   0.125
0.090   0.071   0.122       0.111   0.041   0.151
0.400   0.455   0.208       0.344   0.511   0.048
0.580   0.369   0.137       0.752   0.374   0.023
0.063   0.086   0.227       0.036   0.043   0.311
0.161   0.259   0.153       0.058   0.302   0.064
0.255   0.086   0.063       0.437   0.033   0.041
0.565   0.467   0.180       0.805   0.580   0.010
0.361   0.184   0.271       0.505   0.092   0.307
0.149   0.275   0.396       0.000   0.239   0.364
0.984   0.973   0.957       0.899   0.899   0.891
0.643   0.647   0.651       0.586   0.586   0.586
0.388   0.392   0.396       0.359   0.359   0.359
0.220   0.220   0.224       0.198   0.198   0.194
0.094   0.098   0.102       0.089   0.089   0.089
0.039   0.039   0.039       0.030   0.030   0.030
Table 6. CCM calculated using the pseudo-inverse.

 1.855   −0.750   −0.174
−0.252    1.567   −0.382
 0.079   −0.649    1.471
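As a point of reference, a colour correction matrix of the kind reported in Table 6 can be obtained in closed form with the pseudo-inverse, given the measured and reference triplets of Table 5. The sketch below assumes those 24 rows are stored in NumPy arrays named measured and reference; it illustrates the pseudo-inverse step and is not the exact code running on the device.

```python
import numpy as np

# measured, reference: 24 x 3 arrays of sRGB triplets (rows of Table 5);
# only the first and last rows are written out here.
measured = np.array([[0.141, 0.098, 0.078],
                     [0.482, 0.349, 0.294],
                     # ... remaining rows of Table 5 ...
                     [0.039, 0.039, 0.039]])
reference = np.array([[0.173, 0.082, 0.055],
                      [0.548, 0.311, 0.227],
                      # ... remaining rows of Table 5 ...
                      [0.030, 0.030, 0.030]])

# Least-squares solution of measured @ CCM ≈ reference.
ccm = np.linalg.pinv(measured) @ reference   # 3 x 3 colour correction matrix

corrected = measured @ ccm                   # CCM applied to the measured colours
mse = np.mean((corrected - reference) ** 2)
```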
Table 7. PSO settings for the colour correction task.

Parameter                  Value
NP                         50
Velocity limits            −0.3 to 0.3
Particle initial values    random number in (−2.5, 2.5)
c1                         0.90
c2                         0.50
W                          0.9
Outliers                   0/1
Stopping criterion         stop if the loss has not changed for 100 cycles
Table 8. CCM found using PSO.

 1.856   −0.751   −0.174
−0.252    1.567   −0.382
 0.078   −0.639    1.470
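For context, the objective that a swarm-based search of the CCM entries minimises can be written very compactly: each particle carries nine components that are interpreted as a 3 × 3 matrix and scored by the mean squared error against the reference colours, which is the quantity logged in Table 3. The sketch below is illustrative only; names and structure are assumptions rather than the authors' implementation.

```python
import numpy as np

def ccm_loss(particle, measured, reference):
    """Interpret a 9-component particle as a 3 x 3 CCM and score it by the
    MSE between the corrected measured colours and the reference colours."""
    ccm = np.asarray(particle).reshape(3, 3)
    return np.mean((measured @ ccm - reference) ** 2)
```

With a configuration such as the one in Table 7 (50 particles initialised in (−2.5, 2.5)), each particle is simply a 9-dimensional vector evaluated by this function.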
Table 9. PSO settings for the distortion correction task. Ranges per component of the search space D are reported in Table 10.

Parameter            Value
Particles            50
Velocity limits      clamped at 20% of D
c1                   2.00
c2                   0.2
W                    0.9
Stopping criterion   stop after 25,000 cycles
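For completeness, the canonical PSO velocity and position update with the Table 9 values plugged in looks as follows. This is the textbook formulation and only a sketch of how such settings are typically consumed, not the authors' exact implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, lower, upper,
             w=0.9, c1=2.0, c2=0.2, v_frac=0.2, rng=np.random.default_rng()):
    """One canonical PSO iteration for the whole swarm.
    x, v, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,);
    lower, upper: per-dimension bounds of the search space D (rows of Table 10)."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_max = v_frac * (upper - lower)            # velocity clamped at 20% of D
    v = np.clip(v, -v_max, v_max)
    x = np.clip(x + v, lower, upper)            # keep particles inside D
    return x, v
```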
Table 10. Search space for the distortion correction task.

Component   Lower Bound   Upper Bound
h11          0.9           1.1
h12         −0.3           0.3
h13         −20           +20
h21         −0.3           0.3
h22          0.9           1.1
h23         −20           +20
h31         −0.000005     +0.000005
h32         −0.000005     +0.000005
h33          0.95          1.05
K1          −0.05         +0.05
K2          −0.0005       +0.0005
K3          −0.00005      +0.00005
K4          −0.000005     +0.000005
Ox          −5            +5
Oy          −5            +5
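The 15 components of Table 10 correspond to a 3 × 3 homography (h11 to h33), four radial distortion coefficients (K1 to K4) and a decentring offset (Ox, Oy). The exact correction model is defined in the paper body; the sketch below only illustrates, under standard homography plus polynomial radial distortion assumptions, how one candidate vector might be decoded and applied to an image point.

```python
import numpy as np

def apply_candidate(params, x, y, width, height):
    """Map an image point (x, y) through one candidate correction model.
    params is a 15-vector ordered as in Table 10: h11..h33, K1..K4, Ox, Oy."""
    params = np.asarray(params, dtype=float)
    h = params[:9].reshape(3, 3)               # 3 x 3 homography
    k1, k2, k3, k4, ox, oy = params[9:]

    # Radial term about a decentred optical centre; the radius is normalised
    # by the image diagonal here (an assumption, not the paper's convention).
    cx, cy = width / 2.0 + ox, height / 2.0 + oy
    dx, dy = x - cx, y - cy
    r2 = (dx * dx + dy * dy) / (width**2 + height**2)
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
    xu, yu = cx + dx * scale, cy + dy * scale

    # Projective (homography) part in homogeneous coordinates.
    px, py, pw = h @ np.array([xu, yu, 1.0])
    return px / pw, py / pw
```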
Table 11. PSO real results.

Run   Loss    Max Pix Error   Avg Pix Error
1     186.7   1.55            0.62
2     187.6   1.67            0.62
3     187.2   1.64            0.62
4     209.9   1.87            0.66
5     212.9   1.88            0.67
6     250.8   2.04            0.72
7     209.4   1.83            0.66
8     229.1   2.02            0.69
9     187.9   1.70            0.62
10    224.6   1.91            0.68
Table 12. CMA-ES real results.

Run   Loss    Max Pix Error   Avg Pix Error
1     168.2   1.67            0.59
2     168.3   1.67            0.59
3     167.8   1.67            0.59
4     167.8   1.67            0.59
5     168.2   1.67            0.59
6     168.3   1.67            0.59
7     168.0   1.67            0.59
8     168.8   1.67            0.59
9     167.8   1.67            0.59
10    167.8   1.67            0.59
Table 13. Results on real images using CMA-ES and PSO. The first column (#) reports the measurement number (see Figure 25), while the raw data are reported in the following columns as reference.

                Raw Data                        CMA-ES                       PSO
#     P1x    P1y    P2x    P2y    R pix    P pix   R len    P len     P pix   R len    P len
1     550    631    1395   633    845.0    857.5   9.850    10.008    856.3   9.850    9.994
2     552    144    1402   148    850.0    857.3   9.909    10.006    857.0   9.909    10.002
3     551    1034   1407   1039   856.0    856.1   9.979    9.992     856.3   9.979    9.995
4     390    971    1141   562    855.2    856.6   9.969    9.998     857.5   9.969    10.009
5     792    451    1537   870    854.7    856.7   9.964    9.999     857.4   9.964    10.007
6     61     925    49     48     877.1    857.3   10.224   10.006    877.1   10.224   10.007
7     500    954    495    102    852.0    856.9   9.932    10.002    857.2   9.932    10.005
8     922    940    926    96     844.0    857.3   9.839    10.007    856.4   9.839    9.996
9     1300   961    1294   111    850.0    857.1   9.909    10.004    856.8   9.909    10.001
10    1834   939    1832   60     879.0    856.2   10.247   9.993     856.2   10.247   9.993
11    863    266    1726   266    863.0    856.4   10.060   9.995     856.5   10.060   9.997
12    888    833    1756   844    868.1    856.2   10.119   9.993     856.2   10.119   9.994
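For orientation, the R pix column is consistent with the Euclidean pixel distance between the raw endpoints P1 and P2; for measurement 1, for instance,

R_pix = \sqrt{(1395 - 550)^2 + (633 - 631)^2} \approx 845.0,

which matches the tabulated value.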
Table 14. PSO real results over 10 searches.

Run   Max Pix Error (µm)
1     7
2     7
3     8
4     9
5     6
6     12
7     6
8     13
9     8
10    8