Abstract
In spite of being introduced over twenty-five years ago, Fonseca and Fleming’s attainment surfaces have not been widely used. This article investigates some of the shortcomings that may have led to the lack of adoption of this performance measure. The quantitative measure based on attainment surfaces, introduced by Knowles and Corne, is analyzed. The analysis shows that the results obtained by the Knowles and Corne approach are influenced (biased) by the shape of the attainment surface. Improvements to the Knowles and Corne approach for bi-objective Pareto-optimal front (POF) comparisons are proposed. Furthermore, assuming M objective functions, an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, is proposed for comparing the performance of multi-objective optimization algorithms. A computationally optimized version of the porcupine measure is presented and empirically analyzed.
1. Introduction
First introduced by Fonseca and Fleming [1], attainment surfaces provide researchers in multi-objective optimization with a means to accurately visualize the region dominated by a Pareto-optimal front (POF). In many studies, approximated Pareto optimal fronts (POFs) are shown by joining the non-dominated solutions using a curve. Fonseca and Fleming reasoned that it is not correct to use a curve to join these non-dominated solutions. The use of a curve creates a false impression that intermediate solutions exist between any two non-dominated solutions. In reality, there is no guarantee that any intermediate solutions exist. Fonseca and Fleming suggested that, instead of a curve, the non-dominated solutions can be used to create an envelope that separates the dominated and non-dominated spaces. The envelope formed by the non-dominated solutions is referred to as an attainment surface.
Despite being proposed in 1995, attainment surfaces have not seen wide use in the comparison of multi-objective algorithms (MOAs). Instead, the well-known hypervolume [2,3], inverted generational distance [4,5] and its improvements [6], and spread [7] measures are frequently used to quantify and to compare the quality of approximated POFs. This study provides an analysis of the shortcomings of attainment surfaces as a multi-objective performance measure. Specifically, the attainment-surface-based measure proposed by Knowles and Corne [8] is analyzed. Improvements to Knowles and Corne’s approach for bi-objective optimization problems are developed and analyzed in this paper. Additionally, an M-dimensional (where M is the number of objectives) attainment-surface-based quantitative measure, named the porcupine measure, is proposed and analyzed.
The porcupine measure provides a way to quantify the proportion of the Pareto front on which one algorithm performs statistically significantly better than another algorithm. The objective of this paper is to introduce this new attainment-surface-based measure and to illustrate its applicability. For this purpose, the measure is applied to compare the performance of arbitrarily selected MOAs on a set of multi-objective optimization benchmark problems. Note that the focus is not on an extensive comparison of multi-objective algorithms but rather on validating the use of the porcupine measure as a statistically sound mechanism to compare MOAs.
The remainder of this paper is organized as follows. Section 2 introduces multi-objective optimization along with the definitions used throughout this paper. Section 3 presents the background and related work. Next, 2-dimensional attainment surfaces are introduced in Section 4, followed by a weighted approach to produce attainment surfaces in Section 5. The generalization to M dimensions is provided in Section 6. Finally, the conclusions are given in Section 7.
2. Definitions
Without loss of generality, assuming minimization, a multi-objective optimization problem (MOP) with M objectives is of the form
$$\min_{\mathbf{x} \in S} \; F(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x})\right),$$
with $f_m : \mathbb{R}^n \to \mathbb{R}$ for all $m \in \{1, \ldots, M\}$, and where $S \subseteq \mathbb{R}^n$ is the feasible space as determined by the constraints; $n$ is the dimension of the search space, and $M$ is the number of objective functions.
The following definitions are used throughout this paper.
Definition 1.
(Domination): A decision vector $\mathbf{x}_1$ dominates a decision vector $\mathbf{x}_2$ (denoted by $\mathbf{x}_1 \prec \mathbf{x}_2$) if and only if $f_m(\mathbf{x}_1) \le f_m(\mathbf{x}_2)$ for all $m \in \{1, \ldots, M\}$ and there exists an $m \in \{1, \ldots, M\}$ such that $f_m(\mathbf{x}_1) < f_m(\mathbf{x}_2)$.
Definition 2.
(Pareto optimal): A decision vector $\mathbf{x}^* \in S$ is said to be Pareto optimal if no decision vector $\mathbf{x} \in S$ exists such that $\mathbf{x} \prec \mathbf{x}^*$.
Definition 3.
(Pareto-optimal set): A set $P^* = \{\mathbf{x}^* \in S\}$, where each $\mathbf{x}^*$ is Pareto optimal, is referred to as the Pareto-optimal solutions (POS).
Definition 4.
(Approximated Pareto-optimal front): A set $PF = \{F(\mathbf{x}) \mid \mathbf{x} \in P\}$, where $P$ is a set of non-dominated decision vectors found by an algorithm, is referred to as an approximation of the true POF.
Definition 5.
(Nadir objective vector): A vector that represents the upper bound of each objective in the entire POF is referred to as a nadir point.
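To make these definitions concrete, the following minimal Python sketch (not part of the original article; the function names are illustrative) implements the domination test of Definition 1 and filters a set of objective vectors down to its non-dominated members, as used when forming an approximated POF.

```python
import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 under minimization:
    f1 is no worse in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def non_dominated(front):
    """Keep only the non-dominated objective vectors of a set."""
    front = np.asarray(front, dtype=float)
    keep = [fi for i, fi in enumerate(front)
            if not any(dominates(fj, fi) for j, fj in enumerate(front) if j != i)]
    return np.array(keep)

if __name__ == "__main__":
    points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
    print(non_dominated(points))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```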
3. Background and Related Work
Fonseca and Fleming [1] suggested that the non-dominated solutions that make up the approximated POF be used to construct an attainment surface. The attainment surface’s envelope is defined as the boundary in the objective space that separates those points that are dominated by, or equal to, at least one of the non-dominated solutions that make up the approximated POF from those points that no non-dominated solution dominates or equals. Figure 1 depicts an attainment surface and the corresponding approximated POF.
Figure 1.
Example Pareto-optimal front and attainment surface. (a) An approximated Pareto-optimal front. (b) Attainment surface.
The attainment surface envelope is identical to the envelope used during the calculation of the hypervolume metric [2,3]. In contrast to the hypervolume calculation, in the case of an attainment surface, the envelope is not used directly in the calculation of a performance metric. Instead, the attainment surface can be used to visually compare algorithms’ performance by plotting the attainment surfaces for both algorithms.
For stochastic algorithms, variations in the performance over multiple runs (also referred to as samples) are expected. Fonseca and Fleming [1] described a procedure to generate an attainment surface that represents a given algorithm’s performance over multiple independent runs. The attainment surface for multiple independent runs is computed by first determining the attainment surface for each run’s approximated POF. Next, a number of imaginary lines is chosen at random, each pointing in the direction of improvement for all the objectives. For each line, the points of intersection between the line and each of the attainment surfaces are calculated. Figure 2a,b depict three attainment surfaces with intersection lines and intersection points.
Figure 2.
Attainment surfaces. (a) Example attainment surfaces with intersection lines. (b) Example attainment surfaces with unequally spread intersection lines. (c) Grand attainment surface.
For each line, the intersection points can be seen as a sample distribution that is uni-dimensional and can thus be strictly ordered. By calculating the median for each of these sample distributions, the objective vectors that are likely to be attained in exactly 50% of the runs can be identified. The envelope formed by the median points is known as the 50% grand attainment surface. Similar to how the median is used to construct the 50% grand attainment surface, the lower and upper quartiles (25th and 75th percentiles) are used to construct the 25% and 75% grand attainment surfaces.
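For illustration, the following Python sketch (an interpretation of the procedure above, not Fonseca and Fleming's original code) intersects a ray from the origin with the attainment surface of each run and takes a quantile of the per-run intersection distances on each ray; the median yields points on the 50% grand attainment surface. It assumes minimization with strictly positive, normalized objective values and ray directions.

```python
import numpy as np

def attainment_intersection(front, direction):
    """Distance t along the ray z = t * direction (direction > 0) at which the
    attainment surface of `front` is first reached, i.e., the smallest t such
    that some point p in `front` satisfies p <= t * direction component-wise."""
    front = np.asarray(front, dtype=float)
    d = np.asarray(direction, dtype=float)
    return float(np.min(np.max(front / d, axis=1)))

def grand_attainment_points(runs, directions, quantile=0.5):
    """Per ray, take a quantile of the per-run intersection distances;
    quantile=0.5 gives points on the 50% grand attainment surface,
    0.25 and 0.75 give the 25% and 75% grand attainment surfaces."""
    points = []
    for d in directions:
        distances = [attainment_intersection(front, d) for front in runs]
        points.append(np.quantile(distances, quantile) * np.asarray(d, dtype=float))
    return np.array(points)

if __name__ == "__main__":
    runs = [np.array([[0.10, 0.90], [0.50, 0.50], [0.90, 0.10]]),
            np.array([[0.20, 0.80], [0.60, 0.40], [0.80, 0.20]]),
            np.array([[0.15, 0.85], [0.55, 0.45], [0.85, 0.15]])]
    angles = np.linspace(0.05, np.pi / 2 - 0.05, 5)       # rays through the origin
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    print(grand_attainment_points(runs, directions).round(3))
```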
The sample distribution approach can also be used to compare performance between algorithms. In order to compare two algorithms, two sample distributions—one for each of the algorithms—are calculated per intersection line. Standard non-parametric statistical test procedures can then be used to determine if there is a statistically significant difference between the two sample distributions. Using the statistical test results, a combined grand attainment surface, as depicted in Figure 2c, can be constructed, showing the regions where each of the algorithms outperforms the other. Fonseca and Fleming [1] suggested that suitable test procedures include the median test, its extensions to other quantiles, and tests of the Kolmogorov–Smirnov type [9].
Knowles and Corne [8] extended the work carried out by Fonseca and Fleming and used attainment surfaces to quantify the performance of their Pareto archives evolution strategy (PAES) algorithm. Knowles and Corne identified four variables in the approach proposed by Fonseca and Fleming, namely:
- How many comparison lines should be used;
- Where the comparison lines should go;
- Which statistical test should be used to compare the univariate distributions;
- In what form should the results be presented.
From their empirical analysis, Knowles and Corne found that at least 1000 lines should be used. In order to generate the intersection lines, the minimum and maximum values for each objective over the non-dominated solutions were found. The objective values were then normalized according to the minimum and maximum values into the range [0, 1]. Intersection lines were then generated as equally spread lines from the origin, rotated from the first objective’s axis to the second objective’s axis, effectively rotating through 90° and covering the complete approximated POF.
For M-dimensional problems, where the number of objectives is greater than two, Knowles and Corne suggested using a grid-based approach in which points are spread equally on M hyperplanes of dimension M − 1. Each hyperplane corresponds to one objective fixed at its maximum normalized value. The intersection lines are drawn from the origin to these equally distributed points. In the case of 3-dimensional problems, a 6 × 6 grid would result in 108 (3 × 6 × 6) points and, thus, 108 intersection lines. Similarly, using a 16 × 16 grid on a 3-dimensional problem would result in 768 intersection lines, and so forth.
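The following Python sketch illustrates this grid-based construction for three normalized objectives, under the assumption stated above that each hyperplane fixes one objective at its maximum normalized value; it reproduces the line counts quoted in the text.

```python
import numpy as np

def grid_line_endpoints(k, M=3):
    """Endpoints of grid-based intersection lines: on each of the M hyperplanes
    one normalized objective is fixed at 1 and the remaining M - 1 objectives
    take k equally spaced values in (0, 1]; lines run from the origin to each
    endpoint."""
    ticks = (np.arange(k) + 1.0) / k
    endpoints = []
    for fixed in range(M):
        grid = np.stack(np.meshgrid(*([ticks] * (M - 1))), axis=-1).reshape(-1, M - 1)
        for combo in grid:
            endpoints.append(np.insert(combo, fixed, 1.0))
    return np.array(endpoints)

if __name__ == "__main__":
    print(len(grid_line_endpoints(6)))   # 3 * 6 * 6   = 108 intersection lines
    print(len(grid_line_endpoints(16)))  # 3 * 16 * 16 = 768 intersection lines
```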
For statistical significance testing, Knowles and Corne used the Mann–Whitney U test [9].
Finally, Knowles and Corne found that a convenient way to report the comparison results was to use simple value pairs [a, b], hereafter referred to as the Knowles–Corne measure (KC), where a gives the percentage of the space for which algorithm A was found to be statistically superior to algorithm B, and b gives the percentage for which algorithm B was found to be statistically superior to algorithm A. It can be noted that 100 − a − b gives the percentage for which neither algorithm was found to be statistically superior to the other.
Knowles and Corne [8] generalized the definition of the comparison to compare more than two algorithms. For K algorithms, the above comparison is carried out for all algorithm pairs. For each algorithm k, two percentages are reported: the percentage of the space on which algorithm k was not beaten by any other algorithm, and the percentage of the space on which algorithm k performed better than all the other algorithms. Note that the second percentage can never exceed the first, because the region it describes is contained in the region described by the first.
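This generalization can be computed from per-line pairwise test outcomes as in the hedged Python sketch below; the boolean input structure superior[(i, j)] (algorithm i statistically superior to algorithm j on each line) is an assumed representation, not part of the original formulation.

```python
import numpy as np

def generalized_kc(superior, K, n_lines):
    """For each algorithm k, return (unbeaten%, beats_all%), where
    unbeaten%  is the percentage of lines on which no other algorithm is
               statistically superior to k, and
    beats_all% is the percentage of lines on which k is statistically superior
               to every other algorithm.
    superior[(i, j)] is a boolean array of length n_lines."""
    results = {}
    for k in range(K):
        unbeaten = np.ones(n_lines, dtype=bool)
        beats_all = np.ones(n_lines, dtype=bool)
        for j in range(K):
            if j == k:
                continue
            unbeaten &= ~np.asarray(superior[(j, k)])
            beats_all &= np.asarray(superior[(k, j)])
        results[k] = (100.0 * unbeaten.mean(), 100.0 * beats_all.mean())
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, n_lines = 3, 1000
    superior = {}
    for i in range(K):
        for j in range(i + 1, K):
            r = rng.random(n_lines)
            superior[(i, j)] = r < 0.20   # i beats j on roughly 20% of the lines
            superior[(j, i)] = r > 0.90   # j beats i on roughly 10% of the lines
    print(generalized_kc(superior, K, n_lines))
```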
Knowles [10] found that visualization of attainment surfaces in three dimensions is difficult due to the intersection lines not being evenly spread. As an alternative, Knowles presented an algorithm inspired by the work conducted by Smith et al. [11] to visually draw summary attainment surfaces using axis-aligned lines. The algorithm was found to be particularly well suited for drawing 3-dimensional attainment surfaces.
Fonseca et al. [12] continued work on attainment surfaces by introducing the empirical attainment function (EAF). The EAF is a mean-like, first-order moment measure of the solutions found by a multi-objective optimiser. The EAF allows for intuitive visual comparisons between bi-objective optimization algorithms by plotting the solution probabilities as a heat map [13]. Fonseca et al. [14] studied the use of the second-order EAF, which allows for the pairwise relationship between random Pareto-set approximations to be studied.
It should be noted that calculation of the EAF for three or more dimensions is not trivial [15]. Efficient algorithms to calculate the EAF for two and three dimensions have been proposed in [15]. Tušar and Filipič [16] developed approaches to visualize the EAFs in two and three dimensions.
4. Regarding 2-Dimensional Attainment Surfaces
The attainment surface calculation approach developed by Fonseca and Fleming [1] did not describe in detail how the intersection lines should be generated. Instead, it was only stated that a number of randomly placed intersection lines, each pointing in the direction of improvement for all the objectives, should be used. This approach worked well for constructing a visualization of the attainment surface.
When Knowles and Corne [8] extended the intersection line approach to develop a quantitative comparison measure, they needed the lines to be equally distributed. If the lines were not equally distributed, as depicted in Figure 2b, certain regions of the attainment surface would contribute more than others, leading to misleading results.
Figure 3 depicts two example attainment surfaces with rotation-based intersection lines. Figure 3a depicts a concave attainment surface. Visually, the rotation-based intersection lines appear to be equally distributed. Figure 3b, however, depicts a convex attainment surface. Visually, the length of the attainment surface between the intersection lines is larger in the regions closer to the objective axes than in the middle regions. Clearly, the rotation-based intersection lines are not equally spaced for convex-shaped fronts when comparing the length of the attainment surface represented by each intersection line.
Figure 3.
Attainment surfaces with rotation-based intersection lines. (a) Concave POF. (b) Convex POF.
In order to address the unequal spacing of the rotation-based intersection lines, a new approach to placing the intersection lines is proposed in this paper. To compensate for the shape of the front, the intersection lines can be generated from points positioned on a line running between the extreme values of the attainment surface, pointing either inward or outward depending on the shape of the attainment surfaces being compared. Figure 4 depicts the inward and outward intersection line approaches for a convex-shaped front. The regions are clearly more equally spread for the inward intersection line approach.
Figure 4.
Attainment surfaces with outward/inward intersection lines. (a) Inward. (b) Outward.
However, the direction of the intersection lines is less desirable for comparison purposes. At the edges, the intersection lines are parallel to the opposite objective’s axis. Intuitively, it is more desirable for the intersection lines to be parallel to the closest objective’s axis. Another disadvantage of the inward and outward approaches is that the approach to be selected depends on the shape of the front, which is typically unknown. For attainment surfaces that are not fully convex or concave, neither approach is suitable.
An alternative approach, referred to as attainment-surface-shaped intersection lines (ASSIL) in this paper, is to generate the intersection lines along the shape of the attainment surface. In order to spread the intersection lines equally, the Manhattan distance is used to calculate equal spacings for the intersection lines along the attainment surface. Figure 5 depicts the Manhattan distance calculation between two points on the approximated POF. ASSIL can be generated using Algorithm 1.
Algorithm 1. Attainment-surface-shaped intersection line (ASSIL) generation.
Figure 5.
Attainment surface with Manhattan distance calculations.
Intersection lines are spaced equally along the attainment surface. The intersection lines are rotated incrementally such that the lines at the two ends of the attainment surface are parallel to the objective axes.
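Because the listing of Algorithm 1 is not reproduced here, the following Python sketch gives one possible interpretation of ASSIL generation for a bi-objective attainment surface: anchor points are placed at equal Manhattan-arc-length spacing along the staircase surface, and the line directions are rotated incrementally from one objective axis to the other. The helper names and the linear rotation scheme are assumptions for illustration.

```python
import numpy as np

def assil_lines(front, n_lines):
    """One interpretation of ASSIL for a bi-objective attainment surface
    (minimization): anchors at equal Manhattan-arc-length spacing along the
    staircase surface, with directions rotated incrementally so that the lines
    at the two ends are parallel to the objective axes."""
    pts = np.asarray(sorted(map(tuple, front)), dtype=float)  # sort by f1
    vertices = [pts[0]]
    for a, b in zip(pts[:-1], pts[1:]):
        vertices.append(np.array([b[0], a[1]]))               # staircase corner
        vertices.append(b)
    vertices = np.asarray(vertices)
    seg_len = np.abs(np.diff(vertices, axis=0)).sum(axis=1)   # Manhattan lengths
    cum_len = np.concatenate([[0.0], np.cumsum(seg_len)])
    anchors = []
    for t in np.linspace(0.0, cum_len[-1], n_lines):
        i = min(int(np.searchsorted(cum_len, t, side="right")) - 1, len(seg_len) - 1)
        frac = 0.0 if seg_len[i] == 0 else (t - cum_len[i]) / seg_len[i]
        anchors.append(vertices[i] + frac * (vertices[i + 1] - vertices[i]))
    angles = np.linspace(0.0, np.pi / 2, n_lines)             # incremental rotation
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.asarray(anchors), directions

if __name__ == "__main__":
    front = [(0.1, 0.9), (0.4, 0.5), (0.7, 0.3), (0.9, 0.1)]
    anchors, directions = assil_lines(front, 10)
    print(anchors.round(2))
```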
Figure 6 depicts the attainment-surface-shaped intersection line approach. The generation of the intersection lines along the shape of the attainment surface allows for an equal spacing of the intersection lines independent of the shape of the front. For all shapes that the attainment surface can assume, whether convex, concave, or mixed, the intersection lines are equally spread out.
Figure 6.
Attainment surfaces with unbiased ASSIL. (a) Convex POF. (b) Concave POF.
The KC measure is calculated as shown in Algorithm 2.
Algorithm 2. Calculation of the KC measure.
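Since the listing of Algorithm 2 is likewise not reproduced here, the sketch below shows one way to compute the KC value pair [a, b] from per-line intersection samples using the Mann–Whitney U test from SciPy; the significance level and the sample-collection step (intersecting each run's attainment surface with each line, as in the earlier sketch) are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def kc_measure(samples_a, samples_b, alpha=0.05):
    """samples_a[l] and samples_b[l] hold, for intersection line l, one
    intersection distance per independent run (smaller is better under
    minimization). Returns (a, b): the percentage of lines on which A is
    statistically superior to B, and vice versa."""
    a_wins = b_wins = 0
    for sa, sb in zip(samples_a, samples_b):
        if mannwhitneyu(sa, sb, alternative="less").pvalue < alpha:
            a_wins += 1       # A's distances are stochastically smaller (better)
        elif mannwhitneyu(sa, sb, alternative="greater").pvalue < alpha:
            b_wins += 1       # B's distances are stochastically smaller (better)
    n_lines = len(samples_a)
    return 100.0 * a_wins / n_lines, 100.0 * b_wins / n_lines

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic example: A is better on the first half of the 1000 lines.
    samples_a = [rng.normal(0.8 if l < 500 else 1.0, 0.05, 30) for l in range(1000)]
    samples_b = [rng.normal(1.0, 0.05, 30) for l in range(1000)]
    a, b = kc_measure(samples_a, samples_b)
    print(f"[a, b] = [{a:.1f}%, {b:.1f}%]")
```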
An evaluation of the rotation-based and random intersection line approaches is presented using six artificially generated POF test cases based on those used by Knowles and Corne [8]. Figure 7 depicts the six artificially generated POF test cases. Each of these artificially generated POF test cases was tested using five POF shape geometries, namely concave, convex, linear, mixed, and disconnected geometries. Figure 8 depicts the five POF shape geometries.
Figure 7.
Test case Pareto-optimal fronts. Dots represent algorithm A, and triangles represent algorithm B. (a) Case 1. (b) Case 2. (c) Case 3. (d) Case 4. (e) Case 5. (f) Case 6.
Figure 8.
Test case Pareto-optimal front geometries. (a) Concave. (b) Convex. (c) Linear. (d) Mixed. (e) Disconnected.
Table 1 summarises the true KC, the KC with rotation-based and random intersection lines, and the KC with ASSIL results. Values in red indicate results that were more than 5% better than those of the control method (i.e., the true KC measure), while values in blue indicate results that were more than 5% worse than those of the control method. For each of the approaches, 1000 intersection lines were used for the calculation.
Table 1.
Comparison of the results of the KC measure with ASSIL; blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.
As expected, the ASSIL generation approach produced results much closer to the true KC measure: the closer the POFs being compared are to the true POF, the more accurate the comparison using the ASSIL generation approach becomes.
Table 2 and Table 3 present a comparison of the varying results obtained from using the various intersection line generation approaches. Results comparing the vector evaluated particle swarm optimization (VEPSO) [17], optimized multi-objective particle swarm optimization (OMOPSO) [18], and speed-constrained multi-objective particle swarm optimization (SMPSO) [19] algorithms using the Zitzler-Deb-Thiele (ZDT) [20] and Walking Fish Group (WFG) [21] test sets are presented. The choice of algorithms was arbitrary and only for illustrative purposes. Results were obtained over 30 independent runs. For more details on the algorithms and parameters used, the interested reader is referred to [22]. The characteristics of the problems are summarized in Table 4.
Table 2.
Intersection line comparison between VEPSO, SMPSO, and OMOPSO; blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.
Table 3.
Intersection line comparison between VEPSO, SMPSO, and OMOPSO; blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.
Table 4.
Properties of the ZDT and WFG problems.
Variations in the results between the different intersection line generation approaches can be seen. ZDT1, ZDT3, ZDT4, and ZDT6 all show notable variations in the results for each of the comparisons. WFG1, WFG2, WFG5, WFG6, and WFG9 all show notable variations in the results for at least one of the comparisons. These variations are indicative of the bias towards certain attainment surface shapes exhibited by the various intersection line generation approaches.
5. Weighted 2-Dimensional Attainment-Surface-Shaped Intersection Lines
As an alternative to the equally spread intersection lines used by ASSIL, intersection lines can be generated along the shape of the POF, with at least one intersection line per attainment surface line segment. Because the attainment surface segments are not all of equal length, a weight is associated with each intersection line to balance the KC measure result. The weighted attainment-surface-shaped intersection lines (WASSIL) generation algorithm is given in Algorithm 3.
Figure 9 depicts a convex POF with WASSIL-generated intersection lines. The figure clearly shows that the intersection lines are positioned along the attainment surface, and due to the positioning, the lines are angled slightly differently from the intersection lines in Figure 3b. The WASSIL algorithm should, for the test cases, result in a weighted KC measure result that matches the true KC measure result.
Figure 9.
Convex POF and attainment surface with WASSILs.
Note that the weighted KC measure is calculated as shown in Algorithm 4.
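A minimal sketch of this weighted aggregation is given below; it assumes that each intersection line carries a weight (for example, the Manhattan length of the attainment-surface segment it represents) and reports the weighted percentage of the surface on which each algorithm is statistically superior. It is an illustration, not the exact listing of Algorithm 4.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def weighted_kc(samples_a, samples_b, weights, alpha=0.05):
    """Weighted KC sketch: line l carries per-run samples for algorithms A and B
    plus a weight weights[l] (e.g., the Manhattan length of its segment).
    Returns the weighted percentages (a, b)."""
    weights = np.asarray(weights, dtype=float)
    weight_a = weight_b = 0.0
    for sa, sb, w in zip(samples_a, samples_b, weights):
        if mannwhitneyu(sa, sb, alternative="less").pvalue < alpha:
            weight_a += w     # A statistically superior on this line
        elif mannwhitneyu(sa, sb, alternative="greater").pvalue < alpha:
            weight_b += w     # B statistically superior on this line
    total = weights.sum()
    return 100.0 * weight_a / total, 100.0 * weight_b / total
```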
Table 5 summarises the true KC measure, the KC measure with rotation-based and random intersection lines, the KC measure with ASSIL, and the KC measure with WASSIL results. For each of the approaches, 1000 intersection lines were used for the calculation.
Algorithm 3. Weighted attainment-surface-shaped intersection line (WASSIL) generation.
Table 5.
Comparison of the results of the KC measure with WASSIL; blue indicates performance of 5% worse than the competing algorithm, and red indicates performance of 5% better than the competing algorithm.
For POF test cases 1 through 3, only 2 of the 15 measurements using the random intersection line generation approach showed a small deviation from the true KC. Overall, a large proportion of the measurements using the random intersection line generation approach deviated substantially from the true KC. This confirms that the random intersection line generation approach is not well suited for the KC calculation.
The rotation-based intersection line generation approach presented by Knowles and Corne fared better than the random intersection line generation approach: only 7 of the 30 measurements using the rotation-based approach showed a large deviation from the true KC. Case 1 with a convex POF fared worst, with notable deviations for both algorithms, and four of the five case 2 measurements using the rotation-based approach also deviated considerably; a deviation is noted for the remaining case 2 measurement as well. The results indicate that the rotation-based intersection line generation approach outperformed the random intersection line generation approach with respect to accuracy. However, the results also indicate that the rotation-based approach is not well suited for the KC calculation and that its results vary based on the POF shape and the spread of the solutions.
As expected, the WASSIL generation approach produced results much closer to the true KC: the closer the approximated POFs being compared are to the true POF, the more accurate the comparison using the WASSIL generation approach becomes.
Algorithm 4. Weighted KC measure calculation.
6. M-Dimensional Attainment Surfaces
For M-dimensional problems, Knowles and Corne [8] recommended that a grid-based intersection line generation approach, as explained in Section 3, be used. Similar to the rotational approach for 2-dimensional problems, the grid-based approach would lead to unbalanced intersection lines when measuring irregularly shaped POFs. Figure 10 shows an example of an irregularly shaped 3-dimensional attainment surface.
Figure 10.
3-dimensional attainment surface.
Section 6.1 discusses the challenges that need to be addressed in order to generate intersection lines for M-dimensional attainment surfaces. Section 6.2 and Section 6.3 introduce two algorithms to generate M-dimensional attainment surface intersection lines. The first uses a naive (and computationally expensive) approach that produces all possible intersection lines. The second is computationally more efficient, considering a minimal set of intersection lines. Section 6.4 presents a stability analysis of the two proposed algorithms to show that the computationally efficient approach performs similarly to the naive approach with respect to comparison accuracy.
6.1. Generalizing Attainment Surface Intersection Line Generation to M Dimensions
For 2-dimensional problems, the ASSIL approach generates equally spread intersection lines. Intuitively, generalization of the ASSIL approach to M dimensions requires the calculation of equally spread points over the M-dimensional attainment surface.
The calculation of equally spread points requires that the surface be divided into equally sized (M − 1)-dimensional hypercubes. For the 3-dimensional case, this would require dividing the attainment surface into equally sized squares. The intersection vectors would be positioned from the middle of each square. The length of the edges would need to be set to the greatest common divisor of the lengths of the edges that make up the attainment surface. Even for simple cases, this would lead to an excessive number of squares. The more squares, the higher the computational cost of the measure.
In order to lower the computational cost of the measure, the number of squares needs to be reduced. Because the square edge lengths are based on the greatest common divisor, there is no way to reduce the number of squares as long as the squares must be of equal size. If the square sizes differ, the measure will be biased toward areas with smaller squares, because such areas will contain more squares and thus carry more weight in the calculation.
In contrast to the ASSIL approach, the WASSIL approach does not require the intersection lines to be equally spread; instead, only a weight factor must be known for each intersection line. For the 3-dimensional case, the weight factor for each intersection line is calculated as the area of the corresponding square that makes up the attainment surface, i.e., the product of its edge lengths. The weight factor also allows the use of rectangles in the 3-dimensional case (hyper-rectangles in the M-dimensional case) instead of equally sized squares.
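As a small worked example, the weight of such a hyper-rectangle is simply the product of its edge lengths; the hypothetical helper below computes it.

```python
import numpy as np

def hyper_rectangle_weight(lower, upper):
    """Weight of an (M-1)-dimensional hyper-rectangle on the attainment surface:
    the product of its edge lengths (its area in the 3-objective case)."""
    return float(np.prod(np.asarray(upper, dtype=float) - np.asarray(lower, dtype=float)))

# A 0.2 x 0.5 rectangle on a 3-dimensional attainment surface has weight 0.1.
print(hyper_rectangle_weight([0.0, 0.0], [0.2, 0.5]))
```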
6.2. Porcupine Measure (Naive Implementation)
This section presents the naive implementation of the M-dimensional attainment-surface-based quantitative measure named the porcupine measure. The naive implementation uses each objective value of each Pareto-optimal point to subdivide the attainment surface in each of the dimensions. Figure 11 depicts an example of the subdivision approach for each of the three dimensions, considering all intersection lines. Figure 12 depicts the attainment surface, in 3-dimensional space, with the subdivisions visible.
Figure 11.
Top, front, and side view of attainment surface with naive subdivisions. (a) Top. (b) Front. (c) Side.
Figure 12.
A 3-dimensional attainment surface with naive subdivisions.
In addition to the calculation of the hyper-rectangles, the center point and intersection vector of each hyper-rectangle need to be calculated. The naive implementation of the porcupine measure is summarized in Algorithm 5. For a more detailed algorithm listing, please refer to Appendix A.
Using the intersection lines generated by the above algorithm, two algorithms can now be compared using a non-parametric statistical test, such as the Mann–Whitney U test [9]. The porcupine measure is defined, similar to the weighted KC measure, as the weighted sum of the intersection lines for which a statistically significant difference exists, divided by the sum of all the weights (i.e., the percentage of the attainment surface’s area, as determined by the weights, on which one algorithm performs statistically significantly better than the other).
Algorithm 5. Porcupine measure (naive implementation).
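Because the detailed listing is deferred to Appendix A, the sketch below illustrates the naive subdivision idea for the 3-objective case under simplifying assumptions: the attainment surface is treated as a staircase height field over the first two objectives, the grid is built from every distinct f1 and f2 value of the optimal POF plus the nadir point, and each attained rectangle yields a center point, an intersection vector assumed to point along the third objective axis, and a weight equal to the rectangle's area. The helper names are illustrative and do not correspond to the exact listing of Algorithm 5.

```python
import numpy as np

def surface_height(front, x, y):
    """Height of the 3-objective attainment surface above (x, y): the smallest
    f3 among points with f1 <= x and f2 <= y (inf if the column is unattained)."""
    front = np.asarray(front, dtype=float)
    mask = (front[:, 0] <= x) & (front[:, 1] <= y)
    return float(front[mask, 2].min()) if mask.any() else float("inf")

def naive_subdivision(front, nadir):
    """Naive porcupine subdivision sketch: grid the (f1, f2) plane with every
    coordinate value of the optimal POF (plus the nadir bound) and emit a
    (center, direction, weight) triple for each attained rectangle."""
    front = np.asarray(front, dtype=float)
    xs = np.unique(np.append(front[:, 0], nadir[0]))
    ys = np.unique(np.append(front[:, 1], nadir[1]))
    lines = []
    for x0, x1 in zip(xs[:-1], xs[1:]):
        for y0, y1 in zip(ys[:-1], ys[1:]):
            cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            h = surface_height(front, cx, cy)
            if np.isinf(h):
                continue                            # rectangle is not attained
            center = np.array([cx, cy, h])
            direction = np.array([0.0, 0.0, 1.0])   # assumed: along the f3 axis
            weight = (x1 - x0) * (y1 - y0)          # rectangle area
            lines.append((center, direction, weight))
    return lines

if __name__ == "__main__":
    front = [(0.1, 0.9, 0.3), (0.5, 0.5, 0.5), (0.9, 0.1, 0.7)]
    for center, direction, weight in naive_subdivision(front, nadir=(1.0, 1.0, 1.0)):
        print(center, direction, round(weight, 3))
```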
Figure 13 depicts an attainment surface with subdivisions and intersection vectors generated using the naive approach. The porcupine measure’s name is derived from the fact that the intersection vectors resemble the spikes of a porcupine.
Figure 13.
A 3-dimensional attainment surface with naive subdivisions and intersection vectors.
6.3. Porcupine Measure (Optimized Implementation)
The large number of subdivisions that result from using the naive implementation of the porcupine measure creates a computationally complex problem when performing the statistical calculations required by the porcupine measure. To reduce the computational cost of the porcupine measure, the naive implementation can be optimized by subdividing the attainment surface only as necessary to accommodate the shape of the attainment surface. Figure 14 depicts an attainment surface with the subdivision lines (dashed) as generated by the optimized implementation.
Figure 14.
A 3-dimensional attainment surface with optimized subdivisions.
Note that the algorithm yields the minimum number of subdivisions such that the results are independent of the dimension ordering of the Pareto-optimal points. This is by design to allow for the reproducibility and increased stability of the results.
The optimized implementation of the porcupine measure is summarized in Algorithm 6. For a more detailed algorithm listing, please refer to Appendix B.
Similar to the naive implementation, the porcupine measure is defined as the weighted sum of the intersection lines for which a statistically significant difference exists, divided by the sum of all the weights (i.e., the percentage of the attainment surface’s area, as determined by the weights, on which one algorithm performs statistically significantly better than the other).
Figure 15 depicts an attainment surface with subdivisions and intersection vectors generated using the optimized implementation. As can be seen in the figure, the optimized implementation resulted in notably fewer subdivisions and intersection vectors. The lower number of intersection vectors considerably reduces the computational complexity of the measure.
Figure 15.
A 3-dimensional attainment surface with optimized subdivisions and intersection vectors.
6.4. Stability Analysis
In order to show that the optimized implementation provides results similar to those of the naive implementation, 30 independent runs of each measure were executed. Each measurement run calculated the porcupine measure using the approximated POFs obtained from 30 independent runs of each of the algorithms being compared. A total of 30 × 30 = 900 runs were thus executed for each algorithm being compared.
Table 6 lists the results for the algorithm pairs that were compared. The algorithm pairs are listed without a separator line between them; the results should thus be interpreted by looking at both lines of the comparison. For each algorithm, the mean, standard deviation, minimum, and maximum of the naive and optimized implementations of the porcupine measure are listed.
Table 6.
Naive vs. optimized porcupine measure (3-objective WFG problem set).
The experimental results in [22] show that the maximum side length used for the optimized implementation yielded a good trade-off between the accuracy of the results and performance when compared with the naive implementation.
Statistical testing was performed to determine whether there were any statistically significant differences between the results of the naive and optimized implementations, using the Mann–Whitney U test. The purpose of the statistical testing was to determine whether any information was lost by using the optimized implementation instead of the naive implementation. For 52 of the 54 measurements (approximately 96%), no statistically significant differences were found. Only in two cases, namely OMOPSO in the WFG3 OMOPSO vs. VEPSO comparison and SMPSO in the WFG3 VEPSO vs. SMPSO comparison, was a statistically significant difference noted. Despite this statistical difference, the ranking of the algorithms did not change. It can therefore be concluded that the optimized implementation yielded effectively the same results as the naive implementation, with no loss of information, and that the optimized implementation, with its lower computational complexity, can be used when comparing multi-objective algorithms.
Algorithm 6. Porcupine measure (optimized implementation).
Both the mean and the maximum standard deviations were small, and the measurement values for each of the samples were close to the average. It can therefore be concluded that the optimized implementation of the porcupine measure is very robust.
For the experimentation carried out in this study, the runtime of the optimized implementation was notably shorter than that of the naive implementation, with a difference of a few orders of magnitude.
The computational complexity of the naive implementation is directly proportional to the sizes of the sorted value sets. It should be noted that, for the tested algorithms with an approximated POF size of 50 points over 30 independent samples, the optimal POF had a typical size of 1250 points. The sizes of the sorted value sets were thus approximately 1250 values each. For three dimensions, this resulted in a minimum computational complexity of at least 1250³, or 1,953,125,000 (almost two billion), intersection lines. The optimized implementation resulted in a much lower computational complexity because only the necessary subdivisions were made and the sorted value sets were much smaller. For the three-dimensional case, the maximum edge length used led to a minimum complexity roughly 1000 times lower than that of the naive version.
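The quoted figures can be checked with a few lines of arithmetic (an illustrative snippet, not part of the original study):

```python
naive = 1250 ** 3        # three sorted value sets of roughly 1250 values each
print(naive)             # 1953125000, almost two billion intersection lines
print(naive // 1000)     # roughly 1.95 million for the optimized implementation
```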
7. Conclusions
This article investigated shortcomings that may have led to the lack of adoption of attainment-surface-based quantitative performance measurements for multi-objective optimization algorithms. It was shown that the quantitative measure proposed by Knowles and Corne was biased against convex Pareto-optimal fronts (POFs) when using rotational intersection lines. The attainment-surface-shaped intersection lines (ASSIL) generation approach was proposed. The ASSIL generation approach was shown not to be biased against any attainment surface shape.
An algorithm for an M-dimensional attainment-surface-based quantitative measure, named the porcupine measure, was presented. Additionally, a computationally optimized implementation of the porcupine measure was introduced and analyzed. The results indicated that the optimized implementation performed as well as the naive implementation.
The porcupine measure allows for a quantitative comparison between M-dimensional approximated POFs through the use of attainment surfaces. It provides additional, previously unquantifiable information about an algorithm’s performance relative to another algorithm. A thorough comparison of state-of-the-art multi-objective optimization algorithms using the porcupine measure is left as future work.
Author Contributions
Conceptualization, C.S. and A.E.; methodology, C.S. and A.E.; formal analysis, C.S.; investigation, C.S.; resources, C.S.; data curation, C.S.; writing–original draft preparation, C.S.; writing–review and editing, A.E.; visualization, C.S.; supervision, A.E. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to further research being conducted on the data.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Naive Porcupine Measure Implementation
This appendix provides a more detailed, step-by-step pseudocode implementation of the naive porcupine measure.
Algorithm A1. Naive porcupine measure intersection line generation.
First, determine the optimal POF using the POFs for all the samples of all the algorithms being compared. Then, create sorted value sets containing the dimensional values of all the Pareto-optimal points.
Appendix B. Optimized Porcupine Measure Implementation
This appendix provides a more detailed, step-by-step pseudocode implementation of the optimized, computationally more efficient, porcupine measure.
Algorithm A2. Optimized porcupine measure intersection line generation.
First, the optimal Pareto-optimal front and the nadir vector are calculated. Each point in the optimal POF is processed separately for each dimension. A set is then iteratively constructed that contains all the minimum points that will influence the intersection vectors for the hyper-rectangles that lie between the selected point and the maximum boundary point.
References
- Fonseca, C.M.; Fleming, P.J. On the Performance Assessment and Comparison of Stochastic Multiobjective Optimisers. In Parallel Problem Solving from Nature—PPSN IV; Springer: Berlin/Heidelberg, Germany, 1995; Volume 1141, pp. 584–593. [Google Scholar]
- Zitzler, E.; Thiele, L. Multiobjective Optimization Using Evolutionary Algorithms—A Comparative Case Study. In Parallel Problem Solving from Nature—PPSN V; Springer: Berlin/Heidelberg, Germany, 1998; pp. 292–301. [Google Scholar] [CrossRef]
- Van Veldhuizen, D.A. Multiobjective Evolutionary Algorithms: Classifications, Analyses and New Innovations. Ph.D. Thesis, Air Force Institute of Technology, Dayton, OH, USA, 1999. [Google Scholar] [CrossRef]
- Coello Coello, C.A.; Reyes-Sierra, M. A Study of the Parallelization of a Coevolutionary Multi-Objective Evolutionary Algorithm. In Proceedings of the Third Mexican International Conference on Artificial Intelligence, Mexico City, Mexico, 26–30 April 2004; pp. 688–697. [Google Scholar] [CrossRef]
- Reyes-Sierra, M.; Coello Coello, C.A. A New Multi-Objective Particle Swarm Optimizer with Improved Selection and Diversity Mechanisms; Technical report; Evolutionary Computation Group at CINVESTAV-IPN: Mexico City, México, 2004. [Google Scholar]
- Ishibuchi, H.; Masuda, H.; Tanigaki, Y.; Nojima, Y. An Analysis of Quality Indicators Using Approximated Optimal Distributions in a Three-dimensional Objective Space. In Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, Guimaraes, Portugal, 29 March–1 April 2015; pp. 110–125. [Google Scholar]
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
- Knowles, J.D.; Corne, D.W. Approximating the nondominated front using the Pareto Archived Evolution Strategy. Evol. Comput. 2000, 8, 149–172. [Google Scholar] [CrossRef] [PubMed]
- Gibbons, J.D.; Chakraborti, S. Nonparametric Statistical Inference, 5th ed.; Chapman and Hall: Strand, UK; CRC Press: Boca Raton, FL, USA, 2010; p. 630. [Google Scholar]
- Knowles, J.D. A summary-attainment-surface plotting method for visualizing the performance of stochastic multiobjective optimizers. In Proceedings of the 5th International Conference on Intelligent Systems Design and Applications 2005, ISDA ’05, Warsaw, Poland, 8–10 September 2005; Volume 2005, pp. 552–557. [Google Scholar] [CrossRef]
- Smith, K.I.; Everson, R.M.; Fieldsend, J.E. Dominance measures for multi-objective simulated annealing. In Proceedings of the IEEE Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; Volume 1, pp. 23–30. [Google Scholar] [CrossRef]
- Da Fonseca, V.G.; Fonseca, C.M.; Hall, A.O. Inferential Performance Assessment of Stochastic Optimisers and the Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization, Zurich, Switzerland, 7–9 March 2001; Volume 1993, pp. 213–225. [Google Scholar] [CrossRef]
- López-Ibáñez, M.; Paquete, L.; Thomas, S. Exploratory Analysis of Stochastic Local Search Algorithms in Biobjective Optimization. In Experimental Methods for the Analysis of Optimization Algorithms; Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 209–222. [Google Scholar] [CrossRef]
- Fonseca, C.M.; Da Fonseca, V.G.; Paquete, L. Exploring the Performance of Stochastic Multiobjective Optimisers with the Second-Order Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; Volume 3410, pp. 250–264. [Google Scholar] [CrossRef]
- Fonseca, C.M.; Guerreiro, A.P.; López-Ibáñez, M.; Paquete, L. On the Computation of the Empirical Attainment Function. In Proceedings of the Evolutionary Multi-Criterion Optimization: 6th International Conference, EMO 2011, Ouro Preto, Brazil, 5–8 April 2011; pp. 106–120. [Google Scholar] [CrossRef]
- Tušar, T.; Filipič, B. Visualizing Exact and Approximated 3D Empirical Attainment Functions. Math. Probl. Eng. 2014, 2014, 569346. [Google Scholar] [CrossRef]
- Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization method in multiobjective problems. In Proceedings of the ACM Symposium on Applied Computing, Madrid, Spain, 10–14 March 2002; pp. 603–607. [Google Scholar] [CrossRef]
- Reyes-Sierra, M.; Coello Coello, C.A. Improving PSO-Based Multi-objective Optimization Using Crowding, Mutation and ϵ-Dominance. In Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; Volume 3410, pp. 505–519. [Google Scholar] [CrossRef]
- Nebro, A.J.; Durillo, J.J.; García-Nieto, J.; Coello Coello, C.A.; Luna, F.; Alba, E. SMPSO: A New PSO-based Metaheuristic for Multi-objective Optimization. In Proceedings of the IEEE Symposium on Multi-Criteria Decision-Making, Nashville, TN, USA, 30 March–2 April 2009; Volume 2, pp. 66–73. [Google Scholar] [CrossRef]
- Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
- Huband, S.; Barone, L.; While, L.; Hingston, P. A Scalable Multi-objective Test Problem Toolkit. In Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, 9–11 March 2005; pp. 280–295. [Google Scholar] [CrossRef]
- Scheepers, C. Multi-guided Particle Swarm Optimization: A Multi-Objective Particle Swarm Optimizer. Ph.D. Thesis, University of Pretoria, Pretoria, South Africa, 2018. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).