Apple Image Recognition Multi-Objective Method Based on the Adaptive Harmony Search Algorithm with Simulation and Creation

Aiming at the poor recognition of apple images captured in natural scenes, and at the problems of the OTSU algorithm, which uses a single threshold, lacks adaptability, is easily disturbed by noise, and tends to over-segment, an apple image recognition multi-objective method based on the adaptive harmony search algorithm with simulation and creation is proposed in this paper. The new adaptive harmony search algorithm with simulation and creation expands the search space to maintain the diversity of solutions and accelerates the convergence of the algorithm. In the search process, the harmony tone simulation operator makes each harmony tone evolve toward the optimal harmony individual, ensuring the global search ability of the algorithm. If the evolution yields no improvement, the harmony tone creation operator makes each harmony tone move away from the current optimal harmony individual, extending the search space and maintaining the diversity of solutions. The adaptive factor of the harmony tone restrains the random search of the two operators and accelerates the convergence of the algorithm. The multi-objective optimization recognition method transforms the recognition of apple images collected in natural scenes into a multi-objective optimization problem and uses the new adaptive harmony search algorithm with simulation and creation as the image threshold search strategy. The maximum class variance and maximum entropy are chosen as the objective functions of the multi-objective optimization problem. Compared with the HS, IHS, GHS, and SGHS algorithms, the experimental results show that the improved algorithm has a higher convergence speed and accuracy, and maintains optimal performance with a high-dimensional, large-scale harmony memory.
The proposed multi-objective optimization recognition method obtains a set of non-dominated threshold solutions, which offers more flexibility in threshold selection than the OTSU algorithm. The selected thresholds have better adaptive characteristics and yield good image segmentation results.


Introduction
The harmony search (HS) algorithm [1] is a swarm intelligence algorithm that simulates the process by which music players repeatedly tune the pitch of each instrument from memory to achieve a wonderful harmony. It has been successfully applied to many engineering problems. Like other swarm intelligence algorithms, the HS algorithm suffers from premature and stagnant convergence [2,3]. In recent years, scholars have made many improvements to the optimization performance of the HS algorithm. In [2], the probabilistic idea of the estimation-of-distribution algorithm is applied to design an adaptive adjustment strategy and improve the search ability of the algorithm. In [3], the HS algorithm is combined with the genetic algorithm (GA): multiple iterations of the genetic algorithm provide a better initial solution, and the harmony search algorithm then searches for possible solutions in the adjacent region. Zhu et al. linearly combine the optimal harmony vector with two harmony vectors randomly selected from the population to generate a new harmony and expand the local search area [4]. Wu et al. use a competitive elimination mechanism to update the harmony memory and accelerate the survival of the fittest. Some studies have redesigned two key parameters of the HS algorithm to overcome the shortcomings of fixed parameters [5]; these two parameters have also been improved from different perspectives, yielding different improved harmony search algorithms [6-8]. The studies in [9,10] apply improved HS algorithms to multi-objective optimization problems.
Relevant scholars have conducted a large number of basic studies on the HS algorithm and proposed improvements, achieving good results in applications. Simulated annealing and a genetic algorithm are used to optimally adjust the weights of an aggregation mechanism [11]. An innovative tuning approach for fuzzy control systems (CSs) with reduced parametric sensitivity using the grey wolf optimizer (GWO) algorithm was proposed in [12]; there, the GWO algorithm solves optimization problems whose objective functions include output sensitivity functions [12]. Echo state networks (ESNs) are a special form of recurrent neural networks (RNNs) that allow black-box modeling of nonlinear dynamical systems, and the HS algorithm shows good performance when used to train an echo state network in an online manner [13]. The design of Takagi-Sugeno fuzzy controllers in state feedback form using swarm intelligence optimization algorithms was proposed in [14], adopting particle swarm optimization, simulated annealing, and gravitational search algorithms. Regarding image applications of HS, the following studies have been conducted. A novel approach to visual tracking called the harmony filter models the target as a color histogram and searches for the best estimated target location using the Bhattacharyya coefficient as a fitness metric [15]. In [16], a new dynamic clustering algorithm based on the hybridization of harmony search (HS) and fuzzy c-means was presented to automatically segment MRI brain images in an intelligent manner. Improvements to HS for circle detection were proposed that enable the algorithm to robustly find multiple circles in larger data sets and still work on realistic images heavily corrupted by noisy edges [17]. A new approach to estimating the vanishing point using an HS algorithm was proposed in [18]. Regarding the theoretical background of HS, the following studies have been conducted. To account for realistic design situations, a novel derivative for discrete design variables based on a harmony search algorithm was proposed in [19]. The study in [20] presents a simple mathematical analysis of the explorative search behavior of the HS algorithm. The comparison of the final designs of conventional methods and metaheuristic-based structural optimization methods was discussed in [21]. Regarding adaptive improvements of HS, the following studies have been conducted. An adaptive harmony search algorithm for structural optimization problems was proposed that adjusts its parameters automatically during the search for the most efficient optimization process [22]. A novel technique to eliminate tedious and experience-requiring parameter assignment was proposed for the harmony search algorithm in [23]. Pan et al. present a self-adaptive global best harmony search (SGHS) algorithm for solving continuous optimization problems [24].
Recognition of apple images is one of the most basic aspects of computerized apple grading [25]. Apple image recognition refers to the separation of apple fruit from branches, leaves, soil, and sky, i.e., image segmentation [26]. Threshold segmentation is an extremely important and widely used image segmentation method [27]. Among such methods, the maximum class variance (OTSU) [28] and maximum entropy [29] methods are the two most commonly used, and both have defects to different degrees [30]. The classical OTSU algorithm cannot obtain an ideal segmentation effect for complex images of natural scenes [30]: although it solves the problem of threshold selection, it lacks adaptability, is easily disturbed by noise, tends to over-segment, and requires a great deal of computation time. Therefore, the classical OTSU algorithm still needs further improvement. To comprehensively exploit the advantages of the two segmentation methods, a multi-objective optimization formulation is introduced. In recent years, the HS algorithm has made progress in solving multi-objective optimization problems, such as the stand-alone scheduling problem [10], the wireless sensor network routing problem [31], the urban one-way traffic organization optimization problem [32], and the distribution network model problem [33]. In [34], a phenomenon-mimicking algorithm and harmony search were employed to perform a bi-objective trade-off; the harmony search algorithm explored only a small amount of the total solution space to solve this combinatorial optimization problem. A multi-objective optimization model based on the harmony search algorithm (HS) was presented to minimize the life cycle cost (LCC) and carbon dioxide equivalent (CO2-eq) emissions of buildings [35]. A novel fuzzy-based velocity reliability index using fuzzy theory and harmony search was proposed, which is to be maximized while the design cost is simultaneously minimized [36]. These studies are closely tied to specific applications and cannot easily be extended to other areas.
This paper proposes an apple image recognition multi-objective method based on the adaptive harmony search algorithm with simulation and creation (ARMM-AHS). The method transforms the segmentation of natural-scene images into a multi-objective optimization problem, taking the maximum class variance and maximum entropy as the two objective functions. A new adaptive harmony search algorithm with simulation and creation (SC-AHS) is proposed as the image threshold search strategy to achieve natural-scene apple image segmentation. The results show that the SC-AHS algorithm improves the global search ability and solution diversity of the HS algorithm and has a clear advantage in convergence speed, and that the ARMM-AHS method can effectively segment apple images.

Threshold Segmentation Principle of the OTSU Algorithm
The OTSU algorithm is a global threshold selection method. Its basic idea is to use the grayscale histogram of the image to dynamically determine the segmentation threshold that maximizes the variance between the target and the background.
Suppose that the gray level of an apple image is L and the gray value of each pixel is 0, 1, · · · , L − 1. The number of pixels with gray value i is W_i, the total number of pixels in the image is W = Σ_{i=0}^{L−1} W_i, and the probability of gray value i is p_i = W_i / W. Let the threshold t divide the image into two classes C_0 and C_1 (target and background); that is, C_0 and C_1 correspond to the pixels with gray levels {0, 1, · · · , t} and {t + 1, t + 2, · · · , L − 1}, respectively. The variance between the two classes is given by Equation (1):

σ_B²(t) = ω_0 (µ_0 − µ_T)² + ω_1 (µ_1 − µ_T)²  (1)

The optimal threshold t* should maximize the between-class variance, which is an optimization problem, as shown in Equation (2):

t* = arg max_{0 ≤ t ≤ L−1} σ_B²(t)  (2)

Here, ω_0 and ω_1 respectively represent the probabilities of occurrence of classes C_0 and C_1, µ_0 and µ_1 represent the average gray values of classes C_0 and C_1, respectively, and µ_T represents the average gray level of the entire image. Therefore, the larger the value of σ_B², the better the selected threshold.
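The exhaustive search over t implied by Equation (2) can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation; the function name and the toy histogram are ours.

```python
import numpy as np

def otsu_threshold(hist):
    """Search the threshold t* that maximizes the between-class
    variance of Equation (2). `hist` is the gray-level histogram."""
    p = hist / hist.sum()                # p_i = W_i / W
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()            # mean gray level of the image
    best_t, best_var = 0, -1.0
    for t in range(len(p) - 1):
        w0 = p[:t + 1].sum()             # probability of class C0
        w1 = 1.0 - w0                    # probability of class C1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
        var_b = w0 * (mu0 - mu_T) ** 2 + w1 * (mu1 - mu_T) ** 2
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t

# A toy bimodal histogram: dark background plus a bright object.
hist = np.array([40, 35, 5, 0, 0, 5, 30, 45], dtype=float)
print(otsu_threshold(hist))
```

For the toy histogram above, the search places the threshold in the valley between the two modes.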

Maximum Entropy Segmentation Principle
For the same image, assume again that the threshold t divides the image into two classes C_0 and C_1. The probability distribution of class C_0 is P_t and that of class C_1 is 1 − P_t, where P_t = Σ_{i=0}^{t} p_i, 0 ≤ t ≤ L − 1. The entropy is given by Equation (3):

H(t) = −Σ_{i=0}^{t} (p_i / P_t) ln(p_i / P_t) − Σ_{i=t+1}^{L−1} (p_i / (1 − P_t)) ln(p_i / (1 − P_t))  (3)

The optimal threshold should maximize the entropy, which is also an optimization problem, as shown in Equation (4):

t* = arg max_{0 ≤ t ≤ L−1} H(t)  (4)
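The maximum-entropy criterion of Equations (3) and (4) admits the same kind of exhaustive search. Again a minimal sketch with an illustrative function name and histogram:

```python
import numpy as np

def max_entropy_threshold(hist):
    """Search the threshold that maximizes the total entropy of the
    two classes, per Equations (3) and (4)."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(len(p) - 1):
        P_t = p[:t + 1].sum()
        if P_t in (0.0, 1.0):
            continue
        q0 = p[:t + 1][p[:t + 1] > 0] / P_t          # class C0 distribution
        q1 = p[t + 1:][p[t + 1:] > 0] / (1 - P_t)    # class C1 distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

hist = np.array([40, 35, 5, 0, 0, 5, 30, 45], dtype=float)
print(max_entropy_threshold(hist))
```

On this toy histogram the entropy criterion agrees with the variance criterion; on real natural-scene images the two thresholds generally differ, which is what motivates the multi-objective treatment below.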

Adaptive Harmony Search Algorithm with Simulation and Creation
The inertial particle and fuzzy reasoning were introduced into the bird swarm optimization algorithm (BSA) in [37] to make foraging birds jump out of the local optimal solution and enhance the global optimization ability. The fast shuffled frog leaping algorithm (FSFLA) proposed in [38] makes each frog individual evolve toward the best individual, which accelerates individual evolution and improves the search speed of the algorithm. In accordance with the no-free-lunch theorem [39], this paper fuses the mechanisms of these two different types of optimization algorithms and proposes an adaptive harmony search algorithm with simulation and creation (SC-AHS) to improve the optimization performance of the HS algorithm.

Definition 1 (Adaptive factor of the harmony tone). Record the global best harmony x^g = (x^g_1, x^g_2, · · · , x^g_N), the current harmony x^now = (x^now_1, x^now_2, · · · , x^now_N), and the global worst harmony x^w = (x^w_1, x^w_2, · · · , x^w_N) in each creation of the harmony memory. The adaptive factor of the harmony tone, ξ_i, is defined as follows; it is used to suppress randomness during the simulation and creation of a harmony tone:

Harmony Tone Simulation Operator
Definition 2 (Harmony tone simulation operator). In one harmony evolution, each tone component x^j_i, i = 1, 2, · · · , N of each harmony x^j, j = 1, 2, · · · , HMS in the harmony memory adaptively evolves through the harmony tone simulation operator, as follows, so that the harmony individuals evolve toward the optimal harmony individual:

Harmony Tone Creation Operator
Definition 3 (Harmony tone creation operator). In one harmony evolution, each tone component x^j_i, i = 1, 2, · · · , N of each harmony x^j, j = 1, 2, · · · , HMS in the harmony memory adaptively evolves through the harmony tone creation operator, as follows, so that the harmony individuals evolve away from the current optimal harmony individual:

Adaptive Evolution Theorem with Simulation and Creation
Theorem 1 (Adaptive evolution theorem with simulation and creation). In one harmony evolution, each tone component x^j_i, i = 1, 2, · · · , N of each harmony x^j, j = 1, 2, · · · , HMS in the harmony memory adaptively evolves through the harmony tone simulation operator according to Equation (6). If f(x^{j,new}) < f(x^{j,old}), replace x^{j,old}_i with x^{j,new}_i. Otherwise, it evolves through the harmony tone creation operator according to Equation (7). If f(x^{j,new}) ≥ f(x^{j,old}) still holds, Equation (8) is used to randomly generate the harmony tone x^{j,new}_i.

SC-AHS Algorithm Process
Algorithm 1. SC-AHS
Input: the maximum number of evolutions T_max, the size of the harmony memory HMS, the number of tone components N.
Output: the global optimal harmony x^g.
Step 1: Randomly generate HMS initial harmony individuals and initialize the harmony memory according to Equation (1). The fitness of x^j is f(x^j).
Step 2: Sort the harmonies by fitness f(x^j) in descending order to determine x^g_i, x^now_i, and x^w_i.
For (x^j, j = 1, 2, · · · , HMS)
Begin
  Calculate ξ_i according to Equation (5)
  For (x^j_i, i = 1, 2, · · · , N)
    Perform harmony tone simulation according to Equation (6)
  End
  If (f(x^{j,new}) < f(x^{j,old}))
    Replace x^{j,old} with x^{j,new}
  Else
    For (x^j_i, i = 1, 2, · · · , N)
      Perform harmony tone creation according to Equation (7)
    End
    If (f(x^{j,new}) < f(x^{j,old}))
      Replace x^{j,old} with x^{j,new}
    Else
      Randomly generate x^{j,new}_i according to Equation (8)
End
Step 3: Repeat Step 2 until the number of evolutions reaches T_max, then output x^g.
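Equations (5)-(8) are not reproduced in this excerpt, so in the sketch below the concrete update rules (a linear schedule for the adaptive factor between α_init = 0.1 and α_final = 1.2, attraction toward the best harmony for simulation, repulsion from it for creation) are illustrative assumptions rather than the authors' exact formulas. The sketch only demonstrates the control flow of Algorithm 1 on the Sphere function:

```python
import numpy as np

rng = np.random.default_rng(0)

def sc_ahs_minimize(f, lb, ub, n=30, hms=20, t_max=2000):
    # Harmony memory of HMS random harmonies, each with N tone components.
    hm = rng.uniform(lb, ub, size=(hms, n))
    fit = np.array([f(x) for x in hm])
    best_x, best_f = hm[fit.argmin()].copy(), float(fit.min())
    for t in range(t_max):
        # Assumed adaptive factor, growing linearly from 0.1 to 1.2.
        xi = 0.1 + (1.2 - 0.1) * t / t_max
        for j in range(hms):
            # Harmony tone simulation: evolve toward the best harmony.
            new = hm[j] + xi * rng.random(n) * (best_x - hm[j])
            if f(new) < fit[j]:
                hm[j], fit[j] = new, f(new)
                continue
            # Harmony tone creation: move away from the best harmony
            # to keep the population diverse.
            new = np.clip(hm[j] - xi * rng.random(n) * (best_x - hm[j]), lb, ub)
            if f(new) < fit[j]:
                hm[j], fit[j] = new, f(new)
            else:
                # No improvement either way: regenerate the tone at random.
                hm[j] = rng.uniform(lb, ub, size=n)
                fit[j] = f(hm[j])
        if fit.min() < best_f:          # elitist bookkeeping of the best
            best_f = float(fit.min())
            best_x = hm[fit.argmin()].copy()
    return best_x, best_f

sphere = lambda x: float((x ** 2).sum())
best, val = sc_ahs_minimize(sphere, -100.0, 100.0, n=10, hms=10, t_max=500)
print(round(val, 6))
```

The three branches correspond to simulation, creation, and the random regeneration of Theorem 1; the elitist bookkeeping guarantees the returned fitness never worsens across evolutions.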

Efficiency Analysis of SC-AHS
The efficiency analysis of the algorithm shows that the time complexity of the HS algorithm is O(T_max × N) and its space complexity is O(HMS × N), while the time complexity of the SC-AHS algorithm is O(T_max × N × HMS) and its space complexity is O(HMS × N). Since T_max ≫ HMS, the execution time and storage space of the SC-AHS algorithm and the HS algorithm are of the same order of magnitude. This shows that the SC-AHS algorithm does not markedly increase the computational or space overhead.

Mathematical Problem Description of the Multi-Objective Optimization Recognition Method
Suppose that the gray level of an apple image is L and the gray value of each pixel is 0, 1, · · · , L − 1. There are two classes to be distinguished in the apple image, and the corresponding harmony threshold in each class is written as x^j, j = 1, 2, · · · , HMS, 0 ≤ x^j ≤ L − 1. The process of seeking thresholds for segmenting apple images can be regarded as a multi-objective optimization problem, in which the maximum class variance and maximum entropy are taken as the two objective functions, and the proposed SC-AHS algorithm is used as the image threshold search strategy. Taking minimization as an example, the multi-objective optimization problem can be defined as follows:
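Since the text takes minimization as the example, both objectives can be negated so that smaller is better. The sketch below (our illustrative names; the paper's exact Equation (9) is not reproduced here) evaluates a candidate threshold on both objectives and tests Pareto dominance between two candidates:

```python
import numpy as np

def bi_objective(hist, t):
    """Evaluate threshold t on the two objectives, negated so the
    problem reads as minimization. Assumes both classes are non-empty."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()
    w0 = p[:t + 1].sum()
    w1 = 1.0 - w0
    mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0
    mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
    f1 = -(w0 * (mu0 - mu_T) ** 2 + w1 * (mu1 - mu_T) ** 2)  # -sigma_B^2
    q0 = p[:t + 1][p[:t + 1] > 0] / w0
    q1 = p[t + 1:][p[t + 1:] > 0] / w1
    f2 = (q0 * np.log(q0)).sum() + (q1 * np.log(q1)).sum()   # -entropy
    return f1, f2

def dominates(a, b):
    """a Pareto-dominates b if it is no worse on every objective and
    strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

hist = np.array([40, 35, 5, 0, 0, 5, 30, 45], dtype=float)
print(dominates(bi_objective(hist, 2), bi_objective(hist, 0)))
```

On the toy histogram, the valley threshold dominates the edge threshold on both objectives, so it would survive in the non-dominated set.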

Process of the Apple Image Recognition Multi-Objective Method Based on an Adaptive Harmony Search Algorithm with Simulation and Creation
Step 1: Read and preprocess the apple image. Count the gray level L of the image and the number of pixels W_i at each gray value to calculate the probability p_i of each gray level.
Step 2: Enter the maximum number of evolutions T max , the size of the harmony memory bank HMS, and the number of tone components N.
Step 2.1: HMS initial harmony thresholds are generated randomly, denoted as x^j, j = 1, 2, · · · , HMS. Each harmony threshold is binary coded, and f_1(X) and f_2(X) are calculated according to Equation (9) to initialize the harmony memory. The fitness values of the harmony threshold x^j are f_1(x^j) and f_2(x^j).
Step 2.2: Construct a Pareto optimal solution set, and denote the harmony threshold deleted from the optimal solution set as x^{w,old}.
Step 2.3: Each tone component x^{w,old}_i, i = 1, 2, · · · , N of the harmony threshold x^{w,old} is updated according to the steps of the SC-AHS algorithm.
Step 2.4: Determine whether the number of evolutions has reached T_max. If not, go to Step 2.2. Otherwise, the algorithm stops and the optimal harmony threshold solution set, denoted x*, is output.
Step 3: Use the optimal harmony threshold solution set x* to segment the apple image and output the segmented image.
The process of the apple image recognition multi-objective method based on the adaptive harmony search algorithm with simulation and creation is shown in Figure 1.
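Step 2.2 maintains a set of non-dominated harmony thresholds. A minimal sketch of that non-dominated filter, using toy objective values rather than the paper's data:

```python
def dominates(a, b):
    # a Pareto-dominates b: no worse everywhere, strictly better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates, objs):
    """Keep only the candidates whose objective vectors are not
    dominated by any other candidate (minimization)."""
    front = []
    for i, fi in enumerate(objs):
        if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i):
            front.append(candidates[i])
    return front

# Toy (f1, f2) values for four hypothetical candidate thresholds.
thresholds = [60, 90, 120, 150]
objs = [(-3.0, -1.0), (-5.0, -1.5), (-4.0, -2.0), (-2.0, -0.5)]
print(pareto_front(thresholds, objs))
```

The surviving set here contains the two mutually incomparable candidates; this flexibility in threshold choice is exactly what distinguishes the method from the single-threshold OTSU algorithm.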

Experiment Design
The experimental samples were mature HuaNiu apples. A series of mature HuaNiu apple image segmentation experiments was conducted under six natural-scene conditions: direct sunlight with strong, medium, and weak illumination, and backlighting with strong, medium, and weak illumination. The experiment contains two parts. First, benchmark functions are used to test the optimization performance of the SC-AHS algorithm. Second, the ARMM-AHS method is used to segment and identify apple images. The experiments were run on a Lenovo (Beijing, China) notebook computer with an Intel Core i5 CPU at 2.6 GHz and 4.0 GB of memory. The test platforms were Microsoft Visual C++ 6.0 and Matlab 2012.

Benchmark Function Comparison Experiment and Result Analysis
Five benchmark functions, Sphere, Rastrigin, Griewank, Ackley, and Rosenbrock, were used to simulate the individual fitness of a harmony. The parameters of each function are shown in Table 1 [40].
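The five functions have standard textbook definitions (the exact search ranges used are in Table 1); a compact reference implementation, with the conventional global minimum of 0 at the origin (at (1, ..., 1) for Rosenbrock):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def ackley(x):
    n = len(x)
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def rosenbrock(x):
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

z = np.zeros(30)
print(sphere(z), rosenbrock(np.ones(30)))
```

Sphere and Rosenbrock are the unimodal cases discussed below; Rastrigin, Griewank, and Ackley are the multimodal ones.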

Fixed Parameters Experiment
The fixed parameters experiment uses a minimum-value optimization simulation to compare the optimization performance of the HS, Improved Harmony Search (IHS) [7], Global-best Harmony Search (GHS) [41], Self-adaptive Global best Harmony Search (SGHS) [24], and SC-AHS algorithms.
In the fixed parameter experiment, the experimental parameters were set as follows: HMS = 200, HMCR = 0.9, PAR = 0.3, BW = 0.01, T_max = 20,000, N = 30. The HMCR is the harmony memory considering rate, the PAR is the pitch tune probability, the BW is the pitch tune bandwidth, and T_max is the number of evolutions. In the IHS algorithm, the pitch tune probability minimum is PAR_min = 0.01, the maximum is PAR_max = 0.99, the pitch tune bandwidth minimum is BW_min = 0.0001, and the maximum is BW_max = 1/(20 × (UB − LB)) [7]. In the GHS algorithm, PAR_min = 0.01 and PAR_max = 0.99 [41]. In the SGHS algorithm, the initial mean of the harmony memory considering rate is HMCR_m = 0.98 with standard deviation HMCR_s = 0.01, and this rate is normally distributed within [0.9, 1.0]; the initial mean of the pitch tune probability is PAR_m = 0.9 with standard deviation PAR_s = 0.05, and this probability is normally distributed within [0.0, 1.0]; the learning period is LP = 100 [24]; and the pitch tune bandwidth minimum and maximum are BW_min = 0.0001 and BW_max = 1/(20 × (UB − LB)) [7]. Upper Bound (UB) and Lower Bound (LB) are the maximum and minimum values of the search range of the algorithm. In the SC-AHS algorithm, α_init = 0.1 and α_final = 1.2. The final test results were averaged over 30 independent runs. The performance evaluation uses the following methods: (1) analysis of the convergence accuracy and speed of each algorithm under a fixed number of evolutions; and (2) comparison of the evolution curves of the average optimal value of each function.
The convergence accuracy of each algorithm is shown in Table 2. For the unimodal Sphere (f1) and Rosenbrock (f5) functions, the data in Table 2 show that the SC-AHS algorithm reaches the theoretical minimum of 0 on both, indicating very high convergence accuracy, and that the convergence speed and accuracy of the other algorithms are slightly lower. For the multimodal Rastrigin (f2) function, Table 2 shows that the SC-AHS algorithm again reaches the theoretical minimum of 0, while the other algorithms do not. For the multimodal Griewank (f3) and Ackley (f4) functions, the GHS algorithm reaches the theoretical minimum of 0 within 5000 iterations on the Griewank (f3) function, and its convergence accuracy on the Ackley (f4) function is also better than that of the other algorithms; on these two functions the SC-AHS algorithm converges faster than, and more accurately than, all algorithms except GHS. Comparing the minima obtained by the HS, IHS [7], GHS [41], SGHS [24], and SC-AHS algorithms with fixed parameters shows that the SC-AHS algorithm outperforms the others on the unimodal Sphere (f1) and Rosenbrock (f5) functions and on the multimodal Rastrigin (f2) function. On the multimodal Griewank (f3) and Ackley (f4) functions, the SC-AHS algorithm converges faster than the other algorithms and, except for GHS, more accurately. For every function, the convergence accuracy and speed of the SC-AHS algorithm are significantly better than those of the HS algorithm; compared with the other algorithms, the SC-AHS algorithm has the advantage of fast convergence.

Dynamic Parameters Experiment
The dynamic parameters experiment uses two test methods: varying the number of harmony tone components and varying the harmony memory size, HMS. Under dynamic parameters, the optimization performance of the HS, IHS [7], GHS [41], SGHS [24], and SC-AHS algorithms is compared. The average optimal value of each function, the relative rate of change [42], and the average change value are used as performance measures.

Test of Average Optimum with Different Harmony Tone Components of Five Functions
The number of harmony tone components corresponds to the dimension of the function. High-dimensional, multimodal, complex optimization problems easily cause local optima and premature convergence. It is therefore necessary to adjust the dimension of the benchmark functions (i.e., the number of tone components) and test the optimization performance of each algorithm in high dimensions. The number of tone components ranges from 50 to 100, and the other parameters are unchanged. The effect of the number of tone components on the optimization performance of each algorithm is shown in Table 3, and the variation of the average optimal value of each function with the number of tone components is shown in Figures 2-6. In Table 3:

Relative rate of change = |average optimal value with 50 tone components − average optimal value with 100 tone components| / average optimal value with 50 tone components × 100%

Average change = (Σ_{i=1}^{6} average optimal value for the i-th tone-component setting) / 6 × 100%  (16)

It can be seen that for the unimodal Sphere (f1) and Rosenbrock (f5) functions, the average optimal value of each algorithm tends to increase as the number of tone components grows, whereas the SC-AHS algorithm shows a downward trend; the relative rate of change of the SC-AHS algorithm in Table 3 is also the lowest. For the multimodal Rastrigin (f2), Griewank (f3), and Ackley (f4) functions, the average optimal value of the SC-AHS algorithm varies unstably, but its average change value in Table 3 is the lowest among the algorithms. Therefore, the convergence accuracy of the SC-AHS algorithm does not decrease significantly as the number of tone components increases; that is, the algorithm retains good convergence performance in high-dimensional cases.
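With hypothetical average optimal values (the real ones are in Table 3), the two measures can be computed as follows; the function names and inputs are illustrative only:

```python
def relative_rate_of_change(avg_50, avg_100):
    # |avg(50 components) - avg(100 components)| / avg(50 components), in percent.
    return abs(avg_50 - avg_100) / avg_50 * 100.0

def average_change(avg_values):
    # Mean of the average optimal values over the tested settings, in percent.
    return sum(avg_values) / len(avg_values) * 100.0

print(relative_rate_of_change(2.0, 1.5), average_change([1.0, 2.0, 3.0]))
```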


Test of the Average Optimum with Different Harmony Memory Sizes of Five Functions
The population diversity is affected by changes in the size of the harmony memory, HMS. The HMS parameter takes the values 20, 50, 100, 150, and 200; the other parameters are unchanged. The effect of the harmony memory size on the optimization performance of each algorithm is shown in Table 4, and the variation of the average optimal value of each function with the harmony memory size is shown in Figures 7-11.

In Table 4:

Relative rate of change = |average optimal value with HMS = 20 − average optimal value with HMS = 200| / average optimal value with HMS = 20 × 100%

Average change = (Σ_{i=1}^{5} average optimal value for the i-th harmony memory size) / 5 × 100%

It can be seen that for the unimodal Sphere (f1) and Rosenbrock (f5) functions and the multimodal Rastrigin (f2) function, the average optimal value of each algorithm tends to increase as the harmony memory grows, whereas the SC-AHS algorithm shows a clear downward trend. The relative rate of change of the SC-AHS algorithm in Table 4 is higher than that of the other algorithms, indicating that as the harmony memory grows, population diversity is maintained and convergence accuracy improves. For the multimodal Griewank (f3) and Ackley (f4) functions, the average optimal value of the SC-AHS algorithm varies unstably; nevertheless, its average change value in Table 4 is the lowest among the algorithms, and its relative rate of change is higher than that of the other algorithms. Therefore, the convergence accuracy of the SC-AHS algorithm improves as the harmony memory grows, and it converges better than the other algorithms.

Test of SC-AHS Algorithm with Dynamic Parameters Experiment
This experiment mainly verifies that the SC-AHS algorithm converges quickly. It again uses the dynamic-parameter conditions and the same two test methods: varying the number of harmony tone components and varying the harmony memory size, HMS. The effect of these parameter changes on the convergence speed of the SC-AHS algorithm is studied, with the average optimal value of each function and the number of evolutions used as performance measures.

Test of Average Optimum with Different Harmony Tone Components of the SC-AHS Algorithm
The number of harmony tone components, N, ranges from 50 to 100; the other parameters are unchanged. The effect of the number of tone components on the convergence rate of the SC-AHS algorithm is shown in Table 5, and the variation of the average optimal value of each function with the number of tone components is shown in Figure 12. It can be seen that, as the number of tone components increases, there is no apparent upward trend in the average optimal values. Table 5 shows that the number of evolutions of the SC-AHS algorithm is less than 20,000 for the unimodal Sphere (f1) and Rosenbrock (f5) functions as well as the multimodal Griewank (f3) function at the different numbers of tone components, indicating that the SC-AHS algorithm still converges quickly in the high-dimensional case.

Test of Average Optimum with Different Harmony Memory Sizes of the SC-AHS Algorithm
The HMS memory size again takes the values 20, 50, 100, 150, and 200, and the other parameter settings are unchanged. The effect of the harmony memory size on the convergence rate of the SC-AHS algorithm is shown in Table 6, and the variation of the average optimal value of each function with the harmony memory size is shown in Figure 13. It can be seen that, as the harmony memory size increases, the average optimal value of each function shows a clear downward trend. This is because a larger harmony memory increases population diversity and thus the search performance of the algorithm. Table 6 shows that the number of SC-AHS evolutions is below 20,000 for the unimodal Sphere (f1) and Rosenbrock (f5) functions and the multimodal Rastrigin (f2) and Griewank (f3) functions, showing that the SC-AHS algorithm converges faster when the harmony memory is large.

Apple Image Segmentation Experiment
The experiment uses a single apple image as input and takes the ARMM-AHS method proposed in this paper as the threshold search strategy for apple image segmentation. The maximum class variance function and the maximum entropy function are used as the two objective fitness functions of the multi-objective optimization method. The main interface of the apple image segmentation program, developed with the Matlab 2012 GUI, is shown in Figure 14.
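The two objective fitness functions have standard histogram-based forms. The sketch below is a plain Python/NumPy illustration, not the paper's Matlab implementation, and the function names are this editor's own: the maximum class variance criterion corresponds to Otsu's between-class variance, and the maximum entropy criterion to the Kapur-style sum of class entropies.

```python
import numpy as np

def between_class_variance(hist, t):
    """Between-class variance for threshold t on a 256-bin gray histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)
    w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()  # class probabilities
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0  # background mean
    mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1  # foreground mean
    return w0 * w1 * (mu0 - mu1) ** 2

def max_entropy(hist, t):
    """Sum of background and foreground entropies for threshold t."""
    p = hist / hist.sum()
    w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
    eps = 1e-12  # guards against log(0)
    h0 = -np.sum((p[:t + 1] / (w0 + eps)) * np.log(p[:t + 1] / (w0 + eps) + eps))
    h1 = -np.sum((p[t + 1:] / (w1 + eps)) * np.log(p[t + 1:] / (w1 + eps) + eps))
    return h0 + h1
```

In the multi-objective setting, both values are computed for every candidate threshold and maximized jointly rather than picking a single threshold from one criterion.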

Five segmentation indicators based on the image recognition rate are defined in this paper: the segmentation rate (RSeg), fruit segmentation rate (RApp), branch segmentation rate (RBra), segmentation success rate (RSuc), and missing rate (RMis). The equations of the five indicators are given in Equations (19)-(23). Among them, Nto is the total number of segmented targets (including the numbers of fruits and branches), Npo is the total number of targets obtained by manual visualization (including the numbers of fruits and branches), Na is the total number of segmented fruits, Npa is the total number of fruits obtained by manual visualization, and Nl is the total number of segmented branches. Due to the effects of the illumination conditions and interference factors, a fruit is counted only if its visible size is greater than 1/4 of a complete fruit, a branch is counted only if its length is greater than 1/4 of a complete branch, and fruits that appear too small in the image are not counted.
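The counts above translate directly into rates. Since Equations (19)-(23) are not reproduced in this excerpt, the ratios below are a hedged reading based only on the symbol definitions: the manual branch count is taken as Npo - Npa (targets minus fruits), and RSuc and RMis are omitted because their exact forms are not stated here.

```python
def segmentation_rates(n_to, n_po, n_a, n_pa, n_l):
    """Assumed ratio forms for three of the five indicators.

    n_to: segmented targets (fruits + branches)
    n_po: manually counted targets (fruits + branches)
    n_a:  segmented fruits;  n_pa: manually counted fruits
    n_l:  segmented branches
    """
    r_seg = n_to / n_po          # segmentation rate (assumed form)
    r_app = n_a / n_pa           # fruit segmentation rate (assumed form)
    r_bra = n_l / (n_po - n_pa)  # branch segmentation rate (assumed form)
    return r_seg, r_app, r_bra
```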
Figure 13. Comparison of the average optimum with different harmony memory sizes of five functions.

Conclusions
This paper proposes a new adaptive harmony search algorithm with simulation and creation, which expands the search space to maintain the diversity of the solution and accelerates convergence compared with the HS, IHS, GHS, and SGHS algorithms. An apple image recognition multi-objective method based on this algorithm is also proposed. The multi-objective optimization recognition method transforms the recognition of apple images collected in a natural scene into a multi-objective optimization problem and uses the new adaptive harmony search algorithm with simulation and creation as the image threshold search strategy. The proposed method obtains a set of non-dominated threshold solutions, which makes threshold selection more flexible than with the OTSU algorithm. The selected threshold has better adaptive characteristics and gives good image segmentation results.

3.1.1.2. Adaptive Factor of Harmony Tone
Definition 1 (Adaptive factor of harmony tone). Randomly generate a harmony memory of size HMS and find the global optimal harmony among the current harmonies x^now = (x^now_1, ..., x^now_N).

Figure 1. Flowchart of the apple image recognition multi-objective method based on an adaptive harmony search algorithm with simulation and creation.

Figure 2. Comparison of the average optimum with different harmony tone components of the Sphere function.

Figure 3. Comparison of the average optimum with different harmony tone components of the Rastrigin function.

Figure 4. Comparison of the average optimum with different harmony tone components of the Griewank function.

Figure 5. Comparison of the average optimum with different harmony tone components of the Ackley function.

Figure 6. Comparison of the average optimum with different harmony tone components of the Rosenbrock function.

Test of the Average Optimum with Different Harmony Memory Sizes of Five Functions
The population diversity can be affected by changes in the size of the harmony memory HMS. The parameter HMS takes the values 20, 50, 100, 150, and 200; the other parameters are unchanged. The effect of the change in the size of the harmony memory on the optimization performance of each function is shown in Table 4.

Figure 7. Comparison of the average optimum with different harmony memory sizes of the Sphere function.

Figure 8. Comparison of the average optimum with different harmony memory sizes of the Rastrigin function.

Figure 9. Comparison of the average optimum with different harmony memory sizes of the Griewank function.

Figure 10. Comparison of the average optimum with different harmony memory sizes of the Ackley function.

Figure 11. Comparison of the average optimum with different harmony memory sizes of the Rosenbrock function.

Figure 12. Comparison of the average optimum with different harmony tone components of five functions.

Figure 14. Main interface of the apple image segmentation program.

Figures 15-17 show the comparison of the results of image segmentation under three illumination conditions: direct sunlight with strong illumination, backlighting with strong illumination, and backlighting with medium illumination. The figures show the original image, the preprocessed image, the Pareto optimal frontier fitness curve obtained with the SC-AHS optimization algorithm, the ARMM-AHS segmentation result, and the OTSU segmentation result. Figures 15-17 show that the multi-objective optimization method can be used to obtain a set of non-dominated threshold solutions, which provides more opportunities to select thresholds. The thresholds of the non-dominated solution set under six illumination conditions using the ARMM-AHS method are shown in Table 7. The numbers of non-dominated solutions under the six illumination conditions differ.
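The filtering step that produces such a non-dominated threshold solution set can be sketched generically. For two objectives that are both maximized (here, class variance and entropy), a threshold's objective pair is kept when no other pair is at least as good in both objectives and strictly better in one; this is a standard Pareto filter, not the paper's specific implementation.

```python
def non_dominated(points):
    """Return the non-dominated subset of (f1, f2) pairs, both maximized."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
        if not dominated:
            front.append(p)  # no other pair strictly improves on p
    return front
```

Applied to the pairs (between-class variance, entropy) of all candidate thresholds, this yields the Pareto front from which a final threshold can then be chosen, which is the flexibility the method claims over single-criterion OTSU.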

Figure 15. Comparison of the results of image segmentation under direct sunlight under the strong illumination condition.

Figure 16. Comparison of the results of image segmentation under backlighting under the strong illumination condition.

Figure 17. Comparison of the results of image segmentation under backlighting under the medium illumination condition.

Determine whether the maximum number of evolutions Tmax has been reached. If not, go to Step 2; otherwise, the algorithm stops and outputs x^g.
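For orientation, the stopping rule above can be seen in a minimal harmony search skeleton. This is a hedged sketch of the classic HS loop only: the SC-AHS simulation and creation operators and the adaptive factor are omitted, and the parameter values (hmcr, par, bandwidth) are illustrative.

```python
import random

def harmony_search(fitness, dim, bounds, hms=20, hmcr=0.9, par=0.3, t_max=20000):
    """Classic HS skeleton: improvise, replace worst, stop at t_max evolutions."""
    low, high = bounds
    hm = [[random.uniform(low, high) for _ in range(dim)] for _ in range(hms)]
    fits = [fitness(x) for x in hm]
    for _ in range(t_max):                     # Tmax stopping rule
        new = []
        for d in range(dim):
            if random.random() < hmcr:         # memory consideration
                v = hm[random.randrange(hms)][d]
                if random.random() < par:      # pitch adjustment
                    v += random.uniform(-0.01, 0.01) * (high - low)
                v = min(max(v, low), high)
            else:                              # random selection
                v = random.uniform(low, high)
            new.append(v)
        f_new = fitness(new)
        worst = max(range(hms), key=lambda i: fits[i])
        if f_new < fits[worst]:                # keep only improving harmonies
            hm[worst], fits[worst] = new, f_new
    best = min(range(hms), key=lambda i: fits[i])
    return hm[best], fits[best]                # best harmony x^g and its fitness
```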

Table 1. Parameters of five benchmark functions.

Table 2. Comparison of the results of fixed global evolution times.

Table 3. Comparison of the optimal performance with different harmony tone components of five functions.

Table 4. Comparison of the optimal performance with different harmony memory sizes of five functions.

Table 5. Influence of different harmony tone components on the convergence performance of the SC-AHS algorithm.

Table 6. Comparison of the convergence performance with different harmony memory sizes of the SC-AHS algorithm.

Table 7. Thresholds of the non-dominated solution set under six illumination conditions using the ARMM-AHS method.

Table 8. Comparison of thresholds selected from the non-dominated solution set under six illumination conditions with OTSU thresholds.

Table 9. Apple image segmentation effect comparison under six illumination conditions.