Article

Adaptive Makeup Transfer via Bat Algorithm

Complex System and Computational Intelligent Laboratory, Taiyuan University of Science and Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 273; https://doi.org/10.3390/math7030273
Submission received: 1 February 2019 / Revised: 8 March 2019 / Accepted: 12 March 2019 / Published: 17 March 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract

With the advent of the artificial intelligence (AI) era, beauty cameras are widely used and makeup transfer has attracted increasing attention. In this paper, we propose an adaptive makeup transfer method based on the bat algorithm to overcome the limitation that only a single, fixed makeup effect can be transferred. According to the characteristics of a makeup style, the algorithm adaptively optimizes the weight value to obtain the appropriate makeup lightness. The improved algorithm not only finds the optimal weight values when transferring the same makeup style to different targets or different makeup styles to the same target, but can also choose both the most suitable makeup style and the most appropriate lightness for a given person. Experimental results show that the proposed algorithm outperforms the existing makeup transfer algorithm and can provide users with a suitable makeup style at an appropriate lightness.

1. Introduction

Face makeup is a technique for beautifying one's appearance with cosmetics such as foundation, eye shadow, and blusher. Many commercial facial makeup systems are currently on the market, such as MEITU XIUXIU and TAAZ. MEITU XIUXIU provides users with acne removal, dermabrasion, face-slimming, whitening, eye enlargement, and other intelligent beauty functions. The virtual makeover TAAZ offers trials of pre-prepared cosmetic elements such as lipsticks and eyeliners. However, these systems require manual manipulation and provide only a fixed set of makeup styles. People often model their makeup on that of a well-known celebrity, since such stars lead makeup fashion. Before applying a star's makeup, it would be extremely helpful to preview the effect on one's own face. At present there are two ways to do this. One is to physically apply the makeup, which is time-consuming and requires skill. The other is to try on makeup digitally from photographs, but such software relies heavily on the user's expertise. In addition, because skin colors vary, the makeup look that suits one person rarely suits another, and even within the same look, the most flattering lightness differs from person to person. A digital face makeup system that creates makeup for a face image using another image as the style example, automatically finding the appropriate makeup and its lightness, would therefore be very convenient and practical for users.
Existing makeup transfer methods fall into two categories: traditional image processing approaches [1,2,3,4,5] and deep learning methods [6,7,8]. Image processing approaches generally decompose images into several layers and, after warping the example face image to the non-makeup one, transfer information between corresponding layers. Warping the example image to the subject image may make the output look unnatural. Deep learning methods usually adopt several independent networks to handle each cosmetic individually, which requires large datasets; however, no public face makeup library exists. Our approach is inspired by the work of Guo et al. [1], which adopts the idea of image stratification and decomposes an image into three layers: a face structure layer, containing the structure of every face component, such as the eyes, nose, and mouth; a skin detail layer, containing the skin texture, including flaws, moles, and wrinkles; and a color layer, representing color alone. They then transfer information from each layer of one image to the corresponding layer of the other, thereby transferring the makeup effect automatically. The resultant color is an alpha-blending of the color layers of the example and target images, but the weight of this color transfer is not specified in detail. However, the makeup style that works for one person differs from that which works for another, and different people likewise require different levels of makeup lightness. In short, existing studies have focused on making the transferred result more natural, without adding personalized recommendations to the transfer process, such as recommending a suitable makeup or a suitable lightness for a specific makeup.
To address these issues, we improve the original makeup transfer model and propose an adaptive makeup transfer method based on the bat algorithm [9]. Our approach takes the color weight value as the variable and the beauty score of the face detection service in Baidu AI as the algorithm's fitness value. By continuously optimizing the weight value with the bat algorithm, we obtain the optimal weight value for a specific makeup. Experimental results show that the improved algorithm not only finds the optimal makeup weight when transferring the same makeup to different target faces or different makeup styles to the same face, but also adaptively determines whether the lightness of a given makeup is the most appropriate for that face. Moreover, our method can be applied in the makeup recommendation system of a beauty salon to provide personalized makeup recommendations, and in beauty cameras to provide users with an appropriate makeup and lightness, rather than a single fixed makeup style.
To sum up, the main contributions are the following:
(1) We present an adaptive makeup transfer method that can provide the most appropriate makeup and lightness for a given person.
(2) We take the beauty score of the face detection service in Baidu AI as the evaluation standard for makeup.
(3) To the best of our knowledge, this is the first makeup transfer method based on an optimization algorithm, and it produces better results.
The paper is structured as follows: after reviewing previous work in Section 2, we describe the makeup transfer method in Section 3 and our adaptive makeup model in Section 4. Section 5 presents the experiments and discussion, and Section 6 concludes with a summary and future work.

2. Related Work

There is little research in the field of makeup transfer. The earliest closely related work is by Tong et al. [2], who proposed an image-based technique for cosmetic transfer. Their method requires before-and-after example images created by professional makeup artists and realistically transfers the cosmetic style captured in the example pair to another person's face. One disadvantage is the need for before-after makeup pairs, which are difficult to obtain in most cases. In addition, their method does not change the texture of facial skin, whereas concealing the original skin texture and introducing a new one is more in line with what makeup actually does; in contrast, our approach lets users choose whether or not to introduce a new skin texture. Differently, Guo et al. [1] introduced an approach that creates face makeup on a face image using another image as the style example: they decompose the before-makeup and reference faces into three layers and transfer information between the corresponding layers. Notably, only an "after" image is needed as reference, which makes the application of facial makeup more flexible and practical. Li et al. [3] introduced another image decomposition method that simulates makeup in a photo by manipulating its intrinsic image layers according to proposed adaptations of physically-based reflectance models; one disadvantage is its heavy reliance on accurate decomposition of the intrinsic image layers. Scherbaum et al. [4] used a 3D morphable face model to facilitate facial makeup. Since their makeup representation captures changes in reflectance and scattering, it can synthesize made-up faces in novel 3D views and novel lighting with high realism, but this method also needs before-after makeup pairs. Liu et al. [5] proposed a Beauty e-Experts system for automatic facial hairstyle and makeup recommendation and synthesis, the first study to investigate a fully automatic system that handles hairstyle and makeup recommendation and synthesis simultaneously.
All of the makeup transfer works above are based on traditional methods, but with the development of deep learning [10,11,12], new methods for makeup transfer have appeared. Gatys et al. [6] introduced a parametric texture model based on a deep neural network [13] that can synthesize high-quality natural textures. Liu et al. [7] proposed a deep localized makeup transfer network that automatically transfers makeup from a reference face to a before-makeup face; it is the first makeup transfer method based on a deep learning framework and has five advantages: complete functionality, cosmetic specificity, localization, natural-looking results, and controllable makeup lightness. Their method presets three makeup weights, from light to dark, to generate after-makeup faces with various lightness levels. The key difference from our work is that their weight is chosen from fixed presets, whereas ours is adaptive and provides users with personalized guidance when they wear different makeups. Li et al. [8] proposed a dual input/output BeautyGAN for instance-level facial makeup transfer and introduced a pixel-level histogram loss to constrain the similarity of makeup style.
These studies focused on producing natural-looking results without obvious artifacts and ignored personalized recommendation during makeup transfer. Even where controllable makeup lightness was provided, the user still had to choose the lightness, which may be difficult. To address this problem, we propose an adaptive makeup transfer method based on the bat algorithm. Rather than simply refining the transfer technique in yet another way, this paper changes the makeup transfer model itself so that it provides a suitable makeup style and an appropriate lightness for a given person.
Optimization algorithms [14,15,16,17,18] have a long history of development and are frequently used to solve complex optimization problems [19,20,21,22]. Zhang et al. [23] proposed a hybrid multi-objective cuckoo search with dynamical local search for numerical optimization problems. Wang et al. [24] proposed a multi-objective DV-Hop localization algorithm based on NSGA-II for the sensor node localization problem in WSNs. Cao et al. [25] employed multi-objective cuckoo search (MOCS) for software defect prediction. Optimization algorithms perform well in practical applications and have matured in solving real-world problems [26,27,28]. We therefore use an optimization algorithm to solve the problem that only a single makeup effect can be transferred in makeup transfer.

3. Makeup Transfer

In this section, we introduce a makeup transfer method that transfers facial makeup from an example image to a target image. The target image, named I, is the face image to be made up, and the example image, named E, provides the makeup example. The result image R retains the face structure of I while applying the makeup style of E. The symbols used in the makeup transfer are listed in Table 1.

3.1. Face Alignment

Since information is transferred pixel by pixel, the target and example images must first be aligned. In our approach, the makeup transfer of each facial component consists of two phases: the first decides where to apply the cosmetics, and the second decides what color to apply in that region. For the first phase, a mask determines the facial region to which makeup is applied. A thin plate spline (TPS) [29] warps the example image to the target image for a normalized result, as shown in Figure 1a. TPS warping uses facial feature points, which we obtain with the active shape model (ASM) [30]; we use 83 facial feature points per face, shown for the target image in Figure 1b.
These points define the face components: eyebrows, eyes, nose, lips, mouth cavity, and remaining facial skin, as shown in Figure 1c. We define the regions as follows: $C_1$ is the lip region; $C_2$ comprises the eyes and mouth cavity; and $C_3$ is the skin region, i.e., the entire face excluding $C_1$ and $C_2$. These regions are treated differently during makeup. Because lip texture differs from person to person, we handle the transfer in $C_1$ with a special method, discussed in Section 3.3. $C_2$ is kept untouched throughout the transfer. The transfer in the skin region ($C_3$) is explained in detail later, and the alpha-blending of color in the second phase is discussed in Section 3.4.
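As a concrete illustration, here is a minimal sketch of the warping step using SciPy's RBF interpolator, whose `thin_plate_spline` kernel implements TPS. The function name and the landmark arrays (`tgt_pts`, `ex_pts`, standing in for the 83 ASM points) are our own placeholders, not part of the original implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(example, tgt_pts, ex_pts):
    """Warp the example image so its landmarks (ex_pts) align with the
    target's landmarks (tgt_pts); both are (P, 2) arrays of (row, col)."""
    h, w = example.shape[:2]
    # A TPS map from target coordinates to example coordinates
    tps = RBFInterpolator(tgt_pts, ex_pts, kernel='thin_plate_spline')
    rows, cols = np.mgrid[0:h, 0:w]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    src = tps(grid)                      # where each target pixel samples E
    coords = src.T.reshape(2, h, w)
    # Bilinear sampling of each channel at the mapped positions
    warped = np.stack([map_coordinates(example[..., c], coords,
                                       order=1, mode='nearest')
                       for c in range(example.shape[-1])], axis=-1)
    return warped
```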

3.2. Layer Decomposition

The target image I and the warped example image E are first decomposed into a color layer and a lightness layer by conversion to the CIELAB color space [31,32]: the L channel is the lightness layer, and the a and b channels form the color layer. We then decompose the lightness layer into a face structure layer and a skin detail layer: the weighted-least-squares (WLS) operator [33] performs edge-preserving smoothing on the lightness layer to obtain the face structure layer, and subtracting the face structure layer from the lightness layer yields the skin detail layer.
Denoting the lightness layer and the face structure layer by $l$ and $s$, the WLS method seeks the $s$ minimizing the energy function

$$E = |s - l|^2 + \lambda H(\nabla s, \nabla l), \qquad (1)$$

where $|s - l|^2$ keeps $s$ similar to $l$, the regularization term $H(\nabla s, \nabla l)$ makes $s$ as smooth as possible, $\lambda$ is a constant balancing the two terms (0.2 in this paper), and $\nabla$ denotes the gradient.
The WLS operator performs the same level of smoothing over the whole image [33]; however, because of the particular structure of face images, we want different regions to be smoothed to different degrees. We therefore introduce a spatially variant coefficient $\beta$ into $H$, which becomes

$$H(\nabla s, \nabla l) = \sum_p \beta(p) \left( \frac{|s_x(p)|^2}{|l_x(p)|^{\alpha} + \varepsilon} + \frac{|s_y(p)|^2}{|l_y(p)|^{\alpha} + \varepsilon} \right), \qquad (2)$$

where $p$ indexes image pixels, $\varepsilon$ is a very small constant preventing division by zero, the subscripts $x$ and $y$ denote partial derivatives along the $x$ and $y$ coordinates, and $\alpha$ is a coefficient adjusting the effect of $l$ on $s$ (1.2 in this paper).
We want $\beta(p)$ to change smoothly over the image, so we define

$$\beta(p) = \min_q \left( 1 - k(q)\, e^{-\frac{(q - p)^2}{2\sigma^2}} \right), \qquad (3)$$

where $q$ indexes pixels over the image, and $k(q)$ is 0.7 for eyebrows, 0 for the skin area, and 1 for the other facial components, with

$$\sigma^2 = \frac{\min(\mathrm{height}, \mathrm{width})}{25}. \qquad (4)$$
We then obtain the skin detail layer $d$, defined as

$$d(p) = l(p) - s(p). \qquad (5)$$
In this way, we decompose both the target image and the example image into three layers: face structure, skin detail, and color. We then transfer information from each layer of the example image to the corresponding layer of the target image. Throughout this paper, the subscripts $s$, $d$, and $c$ denote an image's face structure, skin detail, and color layers, respectively.
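To make the decomposition concrete, the following sketch solves Equation (1) as a sparse linear system, assuming the lightness channel and the $\beta$ map of Equation (3) are already computed; the defaults follow the paper ($\lambda = 0.2$, $\alpha = 1.2$), while $\varepsilon$ and the direct solver are our assumptions:

```python
import numpy as np
from scipy.sparse import identity, spdiags
from scipy.sparse.linalg import spsolve

def wls_structure(l, beta, lam=0.2, alpha=1.2, eps=1e-4):
    """Solve Eq. (1) for the face structure layer s; the skin detail
    layer is then d = l - s (Eq. 5).
    l    : (H, W) lightness channel (L of CIELAB), float
    beta : (H, W) spatially variant coefficient from Eq. (3)"""
    h, w = l.shape
    n = h * w
    # Smoothness weights from Eq. (2): large where the gradient of l is small
    gx = beta[:, :-1] / (np.abs(np.diff(l, axis=1)) ** alpha + eps)
    gy = beta[:-1, :] / (np.abs(np.diff(l, axis=0)) ** alpha + eps)
    gx = np.pad(gx, ((0, 0), (0, 1))).ravel()   # weight between k and k+1
    gy = np.pad(gy, ((0, 1), (0, 0))).ravel()   # weight between k and k+w
    # Five-point graph Laplacian D - W built from the neighbour weights
    B = spdiags(np.vstack([gx, gy]), [-1, -w], n, n)
    W = B + B.T
    D = spdiags(np.asarray(W.sum(axis=1)).ravel(), 0, n, n)
    A = identity(n) + lam * (D - W)
    s = spsolve(A.tocsc(), l.ravel()).reshape(h, w)
    return s
```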

3.3. Makeup Transfer on Skin Detail Layer

Skin detail transfer is straightforward: the resultant skin detail layer $R_d$ is a weighted sum of $I_d$ and $E_d$,

$$R_d = \delta_I I_d + \delta_E E_d, \qquad (6)$$

where $0 \le \delta_I, \delta_E \le 1$ are the weights for the transfer of skin details.
In physical makeup, cosmetics on the lips usually preserve the lip texture, so the lips must be treated differently during transfer. We want the makeup effect to resemble E while the texture resembles I, so we fill each pixel of R in the lip region with a pixel value from E, guided by I:

$$M(p) = E(\tilde{q}), \qquad (7)$$

where $M$ is the lip region after makeup and $\tilde{q}$ is the example pixel chosen for target pixel $p$ by

$$\tilde{q} = \arg\max_{q \in C_1} \{\, G(|q - p|)\, G(|E(q) - I(p)|) \,\}, \qquad (8)$$

where $G(\cdot)$ denotes a Gaussian function and $C_1$ is the lip region. For $|E(q) - I(p)|$ we use the difference of pixel values in the L channel only, after separate histogram equalization of E and I.
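A brute-force sketch of Equations (7) and (8) is given below; the Gaussian widths `sigma_s` and `sigma_c` are not specified in the paper and are assumptions here (with L values scaled to [0, 1]), and the per-pixel loop favors clarity over speed:

```python
import numpy as np

def lip_transfer(I_L, E_L, E_img, lip_mask, sigma_s=5.0, sigma_c=0.2):
    """For each lip pixel p, pick the example pixel q maximizing
    G(|q - p|) * G(|E(q) - I(p)|) and copy E(q) into the result (Eq. 7-8).
    I_L, E_L : L channels after histogram equalization (assumed precomputed)
    E_img    : the warped example image; lip_mask : boolean mask of C1."""
    out = E_img.copy()
    qs = np.argwhere(lip_mask)                    # candidate pixels q in C1
    Eq = E_L[qs[:, 0], qs[:, 1]]
    for p in np.argwhere(lip_mask):
        d_sp = np.sum((qs - p) ** 2, axis=1)      # spatial term |q - p|^2
        d_cl = (Eq - I_L[p[0], p[1]]) ** 2        # appearance |E(q) - I(p)|^2
        score = np.exp(-d_sp / (2 * sigma_s ** 2) - d_cl / (2 * sigma_c ** 2))
        q = qs[np.argmax(score)]
        out[p[0], p[1]] = E_img[q[0], q[1]]       # M(p) = E(q~)
    return out
```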

3.4. Makeup Transfer on Color Layer

The resultant color layer $R_c$ is an alpha-blending of $I_c$ and $E_c$:

$$R_c(p) = \begin{cases} I_c(p) & p \in C_2 \\ (1 - r)\, I_c(p) + r\, E_c(p) & \text{otherwise,} \end{cases} \qquad (9)$$

where $r$ is the weight value controlling the blending of the two color layers; the research in this paper builds on this weight. $C_2$ is the region of the eyes and mouth cavity and is kept untouched throughout the makeup transfer.
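Equation (9) reduces to a few lines; `r` is the single scalar that the bat algorithm of Section 4 will optimize. A minimal sketch:

```python
import numpy as np

def blend_color(I_c, E_c, r, c2_mask):
    """Eq. (9): alpha-blend the a, b channels of target and example;
    the eyes/mouth-cavity region C2 keeps the target's original color.
    I_c, E_c : (H, W, 2) color layers; r : scalar weight in [0, 1]."""
    R_c = (1.0 - r) * I_c + r * E_c
    R_c[c2_mask] = I_c[c2_mask]
    return R_c
```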

3.5. Makeup Transfer on Face Structure Layer

Since the face structure layer contains identity information, it can be neither copied directly nor blended. We therefore adopt a gradient-based editing method, taking at each pixel the larger of the two changes:

$$\nabla R_s(p) = \begin{cases} \nabla E_s(p) & \beta(p)\, |\nabla E_s(p)| > |\nabla I_s(p)| \\ \nabla I_s(p) & \text{otherwise.} \end{cases} \qquad (10)$$

Because synthesizing the face structure layer from this gradient field is equivalent to solving a Poisson equation under Dirichlet boundary conditions, we solve it with the successive over-relaxation (SOR) Gauss-Seidel method.
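A didactic dense sketch of this step is shown below: the composite gradient field of Equation (10) is built first, and the Poisson equation is then integrated by SOR sweeps. The relaxation factor `omega` and the iteration budget are our assumptions, and a production version would use a vectorized or multigrid solver:

```python
import numpy as np

def structure_transfer(I_s, E_s, beta, omega=1.8, iters=2000):
    """Keep the example's gradient where its beta-weighted magnitude
    dominates (Eq. 10), then integrate the composite gradient field by
    SOR Gauss-Seidel with the target's values as Dirichlet boundaries."""
    gy_I, gx_I = np.gradient(I_s)
    gy_E, gx_E = np.gradient(E_s)
    use_E = beta * np.hypot(gx_E, gy_E) > np.hypot(gx_I, gy_I)
    gx = np.where(use_E, gx_E, gx_I)
    gy = np.where(use_E, gy_E, gy_I)
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)  # div of the field
    R = I_s.copy()                 # boundary pixels stay fixed (Dirichlet)
    h, w = R.shape
    for _ in range(iters):         # SOR sweeps: solve laplace(R) = div
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                gs = 0.25 * (R[i-1, j] + R[i+1, j] + R[i, j-1] + R[i, j+1]
                             - div[i, j])
                R[i, j] = (1 - omega) * R[i, j] + omega * gs
    return R
```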
Finally, the three resultant layers are composited to form the result image R.

4. Adaptive Makeup Transfer

4.1. Standard Bat Algorithm

BA [34,35,36,37] is a heuristic intelligent algorithm [38,39] that simulates the echolocation used by bats in predation. BA has a simple structure, few parameters, and strong robustness, and it is easy to understand and implement. It has therefore attracted great attention and gradually become a hotspot in computational intelligence research [23,40].
This paper considers the single-objective unconstrained optimization problem of Equation (11):

$$\min f(x), \quad x = (x_1, x_2, \ldots, x_k, \ldots, x_D). \qquad (11)$$
Suppose there are $n$ virtual bats; the $i$th bat ($i = 1, 2, \ldots, n$) is represented as in Equation (12):

$$\langle x_i(t),\, v_i(t),\, fr_i(t),\, A_i(t),\, r_i(t) \rangle, \qquad (12)$$

where $x_i(t) = (x_{i1}(t), \ldots, x_{ik}(t), \ldots, x_{iD}(t))$ and $v_i(t) = (v_{i1}(t), \ldots, v_{ik}(t), \ldots, v_{iD}(t))$ are the position and velocity of the $i$th bat in generation $t$, with the frequency $fr_i(t)$, loudness $A_i(t)$, and emission rate $r_i(t)$ as the three required parameters.
In the next generation, the velocity is updated as follows:

$$v_{ik}(t+1) = v_{ik}(t) + \big(x_{ik}(t) - p_k(t)\big)\, fr_i(t), \qquad (13)$$

where $p(t) = (p_1(t), \ldots, p_k(t), \ldots, p_D(t))$ is the best position found so far by the entire swarm. Equation (13) can be viewed as a combination of the inertia $v_{ik}(t)$ and the influence of $p(t)$. The frequency $fr_i(t)$ is calculated as follows:
$$fr_i(t) = fr_{\min} + (fr_{\max} - fr_{\min})\, rand_1, \qquad (14)$$

where $fr_{\max}$ and $fr_{\min}$ are the maximum and minimum frequency values, respectively, and $rand_1$ is a random number uniformly distributed in [0, 1].
To reflect the bat's decision, the position changes with some randomness. Let $rand_2$ be a random number uniformly distributed in [0, 1]; if $rand_2 < r_i(t)$ is satisfied, the $i$th bat executes the following global search pattern:

$$x_{ik}(t+1) = x_{ik}(t) + v_{ik}(t+1). \qquad (15)$$
Otherwise, the following local search pattern is adopted:

$$x_{ik}(t+1) = p_k(t) + \varepsilon_{ik}\, \bar{A}(t), \qquad (16)$$

where $\varepsilon_{ik}$ is a random number drawn uniformly from [−1, 1] and $\bar{A}(t)$ is the average loudness of all bats,

$$\bar{A}(t) = \frac{1}{n} \sum_{i=1}^{n} A_i(t). \qquad (17)$$
After $x_i(t+1) = (x_{i1}(t+1), \ldots, x_{ik}(t+1), \ldots, x_{iD}(t+1))$ is obtained by Equation (15) or (16), the new position is accepted as follows:

$$x_i(t+1) = \begin{cases} x_i(t+1) & \text{if } rand_3 < A_i(t) \text{ and } f(x_i(t+1)) < f(x_i(t)) \\ x_i(t) & \text{otherwise,} \end{cases} \qquad (18)$$

where $rand_3$ is a random number uniformly distributed in [0, 1]. As in cuckoo search (CS), Equation (18) implies that the position is updated only when two conditions are met: (1) a better position is obtained, and (2) $rand_3 < A_i(t)$ is satisfied. If the position of the $i$th bat is updated, the corresponding loudness $A_i(t+1)$ and emission rate $r_i(t+1)$ are updated as follows:
$$A_i(t+1) = \alpha A_i(t), \qquad (19)$$

$$r_i(t+1) = r(0)\, \big(1 - e^{-\gamma t}\big), \qquad (20)$$

where $\alpha > 0$ and $\gamma > 0$ are two predefined parameters, and $A(0)$ and $r(0)$ are the initial loudness and emission rate, respectively.
The pseudo code of the standard BA is listed in Algorithm 1:
Algorithm 1. Standard bat algorithm
Begin
 For each bat, initialize the position, velocity, and parameters;
While (stop criterion is not met)
  Randomly generate the frequency for each bat with Equation (14)
  Update the velocity for each bat with Equation (13);
  If r a n d 2 < r i ( t )
   Update the temp position for the corresponding bat with Equation (15);
  Else
   Update the temp position for the corresponding bat with Equation (16);
  End
  Evaluate its quality/fitness;
  Re-update the position for the corresponding bat with Equation (18);
  If the position is updated
   Update the loudness and emission rate with Equations (19) and (20), respectively;
  End
  Rank the bats and save the best position;
End
 Output the best position;
End
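For reference, a compact NumPy rendition of Algorithm 1, following Equations (13)–(20) and the Table 2 defaults, is given below; clipping to the search domain and the tie-breaking details are our own choices:

```python
import numpy as np

def bat_algorithm(f, dim=1, n=10, max_evals=1000, lb=0.0, ub=1.0,
                  fr_min=0.0, fr_max=1.0, A0=1.0, r0=0.9,
                  alpha=0.8, gamma=0.9, seed=None):
    """Standard BA (Algorithm 1), minimizing f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))     # positions x_i(0)
    v = np.zeros((n, dim))                # velocities v_i(0)
    A = np.full(n, A0)                    # loudness A_i(t)
    r = np.full(n, r0)                    # emission rate r_i(t)
    fit = np.array([f(xi) for xi in x])
    evals = n
    best = fit.argmin()
    p, p_fit = x[best].copy(), fit[best]  # swarm-best position
    t = 0
    while evals < max_evals:
        t += 1
        fr = fr_min + (fr_max - fr_min) * rng.random(n)      # Eq. (14)
        v += (x - p) * fr[:, None]                           # Eq. (13)
        glob = rng.random(n) < r                             # rand2 < r_i(t)
        local = p + rng.uniform(-1, 1, (n, dim)) * A.mean()  # Eq. (16)
        x_new = np.where(glob[:, None], x + v, local)        # Eq. (15)/(16)
        np.clip(x_new, lb, ub, out=x_new)
        for i in range(n):
            f_new = f(x_new[i]); evals += 1
            if rng.random() < A[i] and f_new < fit[i]:       # Eq. (18)
                x[i], fit[i] = x_new[i], f_new
                A[i] *= alpha                                # Eq. (19)
                r[i] = r0 * (1 - np.exp(-gamma * t))         # Eq. (20)
        best = fit.argmin()
        if fit[best] < p_fit:
            p, p_fit = x[best].copy(), fit[best]
    return p, p_fit

# Example: minimize a 1-D sphere function
best_x, best_f = bat_algorithm(lambda x: float(np.sum(x ** 2)), seed=1)
```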

4.2. Evaluation Standard of Makeup

Beauty researchers usually evaluate makeup effects qualitatively, which is highly subjective. Although Liu et al. evaluated makeup effects both qualitatively and quantitatively, there is still no specific numerical standard for evaluating makeup. Built on Baidu's deep learning algorithms and massive training data, the face detection service in Baidu AI can quickly detect faces and return face attributes; among these, the beauty value scores the beauty of a given face from 0 to 100, the higher the more beautiful. This paper adopts the beauty value as the evaluation criterion: the weight value at which the beauty score peaks indicates the appropriate lightness for a given makeup style, and the style that attains the highest score indicates the most suitable makeup for the person.
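For reference, a query to that service might look roughly like the sketch below. The endpoint, request fields, and response layout reflect our understanding of Baidu's Face Detect v3 REST API and should be checked against the current documentation; the access token is obtained separately via Baidu's OAuth endpoint:

```python
import base64
import requests

def beauty_score(image_path, access_token):
    """Return Baidu AI's 0-100 beauty score for the first detected face.
    Endpoint and field names are assumptions based on the v3 API docs."""
    url = "https://aip.baidubce.com/rest/2.0/face/v3/detect"
    with open(image_path, "rb") as fh:
        img_b64 = base64.b64encode(fh.read()).decode()
    payload = {"image": img_b64, "image_type": "BASE64",
               "face_field": "beauty"}
    resp = requests.post(url, params={"access_token": access_token},
                         json=payload, timeout=10).json()
    return resp["result"]["face_list"][0]["beauty"]
```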

4.3. Adaptive Algorithm

In this paper, we replace the fixed weight value of Guo's method and propose an adaptive makeup transfer based on the bat algorithm. The method takes the weight value as the variable, uses the beauty value returned by the face detection service in Baidu AI as the fitness value, and continuously optimizes with the bat algorithm to obtain the optimal weight value, which yields the appropriate lightness for a given makeup. Figure 2 shows the flow of the adaptive algorithm.
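The glue between the Section 3 pipeline and the optimizer is then a one-dimensional fitness function. In the sketch below, `makeup_transfer` is a hypothetical name for the Section 3 pipeline, `beauty_score` and `bat_algorithm` are the sketches given earlier, and `target`, `example`, and `TOKEN` are placeholders; since BA minimizes, the beauty value is negated:

```python
import cv2  # used only to write the intermediate result image

def fitness(weight, target, example, token):
    """Negated beauty value of the after-makeup image produced with
    color weight r = weight[0] (lower is better for the minimizer)."""
    result = makeup_transfer(target, example, r=float(weight[0]))
    cv2.imwrite("result.jpg", result)
    return -beauty_score("result.jpg", token)

best_w, best_f = bat_algorithm(lambda w: fitness(w, target, example, TOKEN),
                               dim=1, n=10, max_evals=1000)
print("optimal weight r = %.4f, beauty = %.2f" % (best_w[0], -best_f))
```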

5. Experiments and Results

We set up three experiments: a comparison with Guo's method, transferring different makeup styles to the same target image, and transferring the same makeup style to different target images. A quantitative comparison is then conducted.

5.1. The Contrast Experiment with Guo’s

In this experiment, the target and example images are taken from Guo's paper and are shown in Figure 3. In Guo's paper, the fixed value 0.8 is chosen as the weight value, with 52.48 as its corresponding beauty value. The weight value reflects the degree of makeup lightness: the higher the value, the heavier the makeup; the lower the value, the subtler the makeup.
The experimental index and parameter settings are listed in Table 2.
The experiment was run 5 times independently; the results are shown in Table 3. The weight value of Guo's method is 0.8 with a beauty value of 48.75, whereas the optimal weight value found here stays around 0.05 with a beauty value around 54.2, an increase of 10.77% over Guo's. This is because Guo's weight value is fixed, whereas we treat the weight value as the variable and the beauty value as the fitness value of the algorithm: by continuously optimizing the weight value with the bat algorithm, we obtain the optimal weight value for the target image. Figure 4 shows the change in the weight value and the beauty value from the old model to the improved model. As Figure 4 shows, the weight value and the beauty value remain almost unchanged across runs, indicating that the algorithm is stable.
Figure 5 is the dynamic optimization curve of the fitness function; the curves for the other experiments in this paper are similar and are therefore omitted. Figure 5 shows that in the early stage the algorithm performs a global search and the beauty value rises continuously until a better weight value is found; as the algorithm later performs local search, it reaches the optimal weight value while the beauty value stabilizes.

5.2. The Application of Adaptive Algorithm on Different Makeup Styles

In this experiment, six representative makeups were selected as example makeups, listed in Table 4: Euramerica makeup style, Euramerica smoky-eyes makeup style, Asian smoky-eyes makeup style, Asian retro makeup style, Korean makeup style, and Japanese makeup style. The target image is again that of Guo's paper, shown in Figure 3a.
When the same person wears different makeups, the suitable style differs, as does the suitable lightness for each style. Table 5 and Figure 6 show that when different makeup styles are transferred to the same target image, the weight values corresponding to the optimal beauty values differ, i.e., the weight value adapts to the style. The highest beauty value in Table 5 is 59.69, obtained with the Euramerica makeup style, indicating that this target suits the Euramerica style best among the six representative styles; the corresponding weight value is 0.02, indicating that a subtle application of this style suits the target. The weight value for the Asian smoky-eyes style is the highest, 0.92, suggesting the target also suits a heavy application of that style. Moreover, every transferred style improved the beauty value, by 13.74% in the case of the Euramerica style. It can also be concluded that there is no correlation between the beauty value and the weight value.

5.3. The Application of Adaptive Algorithm on Different Non-Makeup Images

In this experiment, five non-makeup images were selected as target images, shown in Table 6 (a) to (e), and the Korean makeup style in Table 4 (E) was used as the example to transfer. Table 6 and Figure 7 show that when the same example image is transferred to the five targets, the weight values corresponding to the optimal beauty values differ, i.e., the weight value adapts to the target; for a certain makeup, different people suit different degrees of lightness. As shown in Table 6, the beauty value of image (a) increases the most, by 15.82% from 61.5 to 71.23, from which it can be inferred that the Korean makeup style suits image (a) best among the non-makeup images. When the Korean style is transferred to image (e), the weight value is 0.13, so a subtle lightness is more appropriate for it.
Everyday experience tells us that when different people wear the same makeup, the suitable lightness is not the same. By continuously optimizing the weight value with the bat algorithm, our method finds the most appropriate weight value, so the beauty value improves relative to the unoptimized case.

5.4. Quantitative Comparisons

The quantitative comparison focuses on the quality of the makeup transfer and the appropriateness of the makeup lightness. Seven makeups were used as example makeups (Figure 3b and Table 4 (A) to (F)), and Figure 3a and Table 6 (a) to (e) served as the six non-makeup images, giving 6 × 7 = 42 after-makeup results in total. We conducted two user studies: one comparing our method with the non-makeup case, and one comparing Guo's method with ours.
Table 7 shows the quantitative comparison between the non-makeup case and ours. A non-makeup face, an example face, and the after-makeup face produced by our method were shown to 50 participants, who rated the results on five levels: much better, better, same, worse, and much worse. The comparison in Table 7 shows that our result is rated much better than the non-makeup case in 53.71% of cases.
Table 8 shows the quantitative comparison between Guo's method and ours. Each time, a 4-tuple consisting of a non-makeup face, an example face, the after-makeup face produced by our method, and the after-makeup face produced by Guo's method was shown to 50 participants, with the two after-makeup faces presented in random order. The participants rated the results on the same five levels. Our method is rated much better or better than Guo's in 24.38% and 43.95% of cases, respectively.
These two studies demonstrate the effectiveness of the proposed adaptive makeup transfer method from the end user's point of view.

6. Conclusions

In this paper, we propose an adaptive makeup transfer method based on the bat algorithm. We replace the fixed weight of Guo's method with a self-adaptive weight value, through which we obtain the optimal weight both when transferring different makeup styles to the same target and when transferring the same makeup style to different targets. The algorithm can choose not only the most suitable makeup but also the most appropriate lightness for a given makeup. To the best of our knowledge, this is the first work to combine a beauty evaluation algorithm with an intelligent optimization algorithm, and it achieves good results.
A limitation of the current research is that it only handles frontal, upright faces. Extending the method to profile views would make it more practical, so side faces will be a focus of future work on makeup transfer.

Author Contributions

Y.R. and Y.S. suggested the improving method and wrote the original draft preparation. X.J. was responsible for checking this paper. Z.S. and Z.C. provided supervision.

Funding

This work is supported by the National Natural Science Foundation of China under Grant No.61806138, No.U1636220 and No.61663028, Natural Science Foundation of Shanxi Province under Grant No.201801D121127, Scientific and Technological innovation Team of Shanxi Province under Grant No.201805D131007, PhD Research Startup Foundation of Taiyuan University of Science and Technology under Grant No.20182002.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, D.; Sim, T. Digital face makeup by example. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 18 August 2009; Volume 30, pp. 73–79. [Google Scholar]
  2. Tong, W.; Tang, C.; Brown, M.S.; Xu, Y. Example-based cosmetic transfer. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications, Maui, HI, USA, 4 December 2007; Volume 20, pp. 211–218. [Google Scholar]
  3. Li, C.; Zhou, K.; Lin, S. Simulating makeup through physics-based manipulation of intrinsic image layers. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 15 October 2015; pp. 4621–4629. [Google Scholar]
  4. Scherbaum, K.; Ritschel, T.; Hullin, M.; Thormählen, T.; Blanz, V.; Seidel, H.P. Computer-suggested facial makeup. Comput. Graph. Forum 2011, 30, 485–492. [Google Scholar] [CrossRef]
  5. Liu, L.; Xing, J.; Liu, S.; Xu, H.; Zhou, X.; Yan, S. Wow! you are so beautiful today! In Proceedings of the 21st ACM International Conference on Multimedia, New York, NY, USA, 21–25 October 2013; Volume 11, pp. 3–12. [Google Scholar]
  6. Gatys, L.A.; Ecker, A.S.; Bethge, M. A neural algorithm of artistic style. arXiv, 2015; arXiv:1508.06576. [Google Scholar] [CrossRef]
  7. Liu, S.; Ou, X.; Qian, R.; Wang, W.; Cao, X. Makeup like a superstar: Deep localized makeup transfer network. In Proceedings of the IJCAI’16 Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016. [Google Scholar]
  8. Li, T.; Qian, R.; Dong, C.; Liu, S.; Yan, Q.; Zhu, W.; Lin, L. Beautygan: Instance-level facial makeup transfer with deep generative adversarial network. In Proceedings of the MM 18 Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Korea, 22–26 October 2018; pp. 645–653. [Google Scholar]
  9. Cai, X.; Gao, X.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Bio-Inspir. Comput. 2016, 8, 205–214. [Google Scholar] [CrossRef]
  10. Cui, Z.; Xue, F.; Cai, X.; Cao, Y.; Wang, G.; Chen, J. Detection of malicious code variants based on deep learning. IEEE Trans. Ind. Inform. 2018, 14, 3187–3196. [Google Scholar] [CrossRef]
  11. Ni, J.; Xu, X.; Ding, S.; Sun, T. An adaptive extreme learning machine algorithm and its application on face recognition. Int. J. Comput. Math. 2015, 6, 611–619. [Google Scholar] [CrossRef]
  12. Yadav, N.; Srivastava, T. Convolution backprojection algorithm for tomographic image reconstruction with contourlet transform. Int. J. Comput. Math. 2016, 7, 156–165. [Google Scholar] [CrossRef]
  13. Abbasnejad, H.; Jafarian, A. A new method based on artificial neural networks for solving general nonlinear systems. Int. J. Comput. Math. 2018, 9, 207–218. [Google Scholar] [CrossRef]
  14. Wang, H.; Wang, W.; Cui, Z.; Zhou, X.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106. [Google Scholar] [CrossRef]
  15. Wang, G.; Cai, X.; Cui, Z.; Min, G.; Chen, J. High Performance Computing for Cyber Physical Social Systems by Using Evolutionary Multi-Objective Optimization Algorithm. IEEE Trans. Emerg. Top. Comput. 2018. [Google Scholar] [CrossRef]
  16. Wang, H.; Wang, W.; Cui, L.; Sun, H.; Zhao, J.; Wang, Y.; Xue, Y. A hybrid multi-objective firefly algorithm for big data optimization. Appl. Soft Comput. 2018, 69, 806–815. [Google Scholar] [CrossRef]
  17. Wang, Y.; Wang, P.; Zhang, J.; Cui, Z.; Cai, X.; Zhang, W.; Chen, J. A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization. Mathematics 2019, 7, 135. [Google Scholar] [CrossRef]
  18. Hui, W.; Wang, W.; Hui, S.; Rahnamayan, S. Firefly algorithm with random attraction. Int. J. Bio-Inspir. Comput. 2016, 8, 33–41. [Google Scholar]
  19. Yu, W.; Wang, J. A new method to solve optimisation problems via fixed point of firefly algorithm. Int. J. Bio-Inspir. Comput. 2018, 11, 249–256. [Google Scholar] [CrossRef]
  20. Mohammadi, R.; Javidan, R.; Keshtgari, M. An intelligent traffic engineering method for video surveillance systems over software defined networks using ant colony optimisation. Int. J. Bio-Inspir. Comput. 2018, 12, 173–185. [Google Scholar] [CrossRef]
  21. Parpinelli, R.S.; Plichoski, G.F.; Silva, R.S.D.; Narloch, P.H. A review of techniques for online control of parameters in swarm intelligence and evolutionary computation algorithms. Int. J. Bio-Inspir. Comput. 2019, 13, 1–20. [Google Scholar] [CrossRef]
  22. Ma, L.; Wang, X.; Shen, H.; Huang, M. A novel artificial bee colony optimiser with dynamic population size for multi-level threshold image segmentation. Int. J. Bio-Inspir. Comput. 2019, 13, 32–44. [Google Scholar] [CrossRef]
  23. Zhang, M.; Wang, H.; Cui, Z.; Chen, J. Hybrid multi-objective cuckoo search with dynamical local search. Memet. Comput. 2018, 10, 199–208. [Google Scholar] [CrossRef]
  24. Wang, P.; Xue, F.; Li, H.; Cui, Z.; Xie, L.; Chen, J. A Multi-Objective DV-Hop Localization Algorithm Based on NSGA-II in Internet of Things. Mathematics 2019, 7, 184. [Google Scholar] [CrossRef]
  25. Cao, Y.; Ding, Z.; Xue, F.; Rong, X. An improved twin support vector machine based on multi-objective cuckoo search for software defect prediction. Int. J. Bio-Inspir. Comput. 2018, 11, 282–291. [Google Scholar] [CrossRef]
  26. Yuan, F.; Chen, S.; Liu, H.; Xu, L. Artificial bee colony-based extraction of non-taxonomic relation between symptom and syndrome in TCM records. Int. J. Comput. Math. 2015, 6, 600–610. [Google Scholar] [CrossRef]
  27. Tang, H.; Sun, D. A multi-factor prediction algorithm in big data computing environments. Int. J. Comput. Math. 2016, 7, 312–322. [Google Scholar] [CrossRef]
  28. Pan, X.; Zhou, W.; Lu, Y.; Li, R. User collaborative filtering recommendation algorithm based on adaptive parametric optimisation SSPSO. Int. J. Comput. Math. 2017, 8, 580–592. [Google Scholar] [CrossRef]
  29. Bookstein, F.L. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern. Anal. Mach. Intell. 1989, 11, 567–585. [Google Scholar] [CrossRef]
  30. Milborrow, S.; Nicolls, F. Locating facial features with an extended active shape model. In Proceedings of the Computer Vision—ECCV 2008; Springer: Berlin/Heidelberg, Germany, October 2008; Volume 5305, pp. 504–513. [Google Scholar]
  31. Woodland, A.; Labrosse, F. On the separation of luminance from colour in images. In Proceedings of the Institute of Mathematics and its Applications—Vision, Video and Graphics, Edinburgh, UK, January 2005; pp. 29–36. [Google Scholar]
  32. Lukac, R.; Plataniotis, K.N. Color Image Processing: Methods and Applications, 1st ed.; CRC Press: Toronto, ON, Canada, 2006; pp. 155–198. [Google Scholar]
  33. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 67:1–67:10. [Google Scholar] [CrossRef]
  34. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Proceedings of the Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, April 2010; Volume 284, pp. 65–74. [Google Scholar]
  35. Cai, X.; Wang, H.; Cui, Z.; Cai, J.; Xue, Y.; Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybern. 2018, 9, 199–215. [Google Scholar] [CrossRef]
  36. Cui, Z.; Li, F.; Zhang, W. Bat algorithm with principal component analysis. Int. J. Mach. Learn. Cybern. 2019, 10, 603–622. [Google Scholar] [CrossRef]
  37. Cui, Z.; Cao, Y.; Cai, X.; Cai, J.; Chen, J. Optimal LEACH protocol with modified bat algorithm for big data sensing systems in Internet of Things. J. Parallel Distrib. Comput. 2018. [Google Scholar] [CrossRef]
  38. Cui, Z.; Wang, Y.; Cai, X. A pigeon-inspired optimization algorithm for many-objective optimization problems. Sci. China Inf. Sci. 2019, 62, 070212. [Google Scholar] [CrossRef]
  39. Wang, H.; Wang, W.; Zhou, X.; Sun, H.; Zhao, J.; Yu, X.; Cui, Z. Firefly algorithm with neighborhood attraction. Inf. Sci. 2017, 382, 374–387. [Google Scholar] [CrossRef]
  40. Cui, Z.; Sun, B.; Wang, G.; Xue, Y.; Chen, J. A novel oriented cuckoo search algorithm to improve DV-Hop performance for cyber–physical systems. J. Parallel Distrib. Comput. 2017, 103, 42–52. [Google Scholar] [CrossRef]
Figure 1. Face alignment. (a) Warping the example image to the target image; W denotes warping. (b) Facial feature points on a face. (c) Facial components defined by the facial feature points in (b).
Figure 2. The flow chart of the adaptive algorithm.
Figure 3. Face makeup by example. (a) A target image; (b) an example style image.
Figure 4. Comparison between Guo's method and ours.
Figure 5. Dynamic optimization curve of the fitness function.
Figure 6. The same girl wears different makeups.
Figure 7. Different girls wear the same makeup.
Table 1. Symbol definitions.

| Symbol | Definition |
| --- | --- |
| $l$ | the lightness layer |
| $s$ | the face structure layer |
| $d$ | the skin detail layer |
| $\nabla$ | the gradient operator |
| $p$ | an image pixel |
| $\lambda$ | a constant balancing $\lvert s - l \rvert^2$ and $H(\nabla s, \nabla l)$ |
| $\varepsilon$ | a very small constant preventing division by zero |
| $\alpha$ | the coefficient adjusting the effect of $l$ on $s$ |
| $q$ | a pixel over the image |
| $r$ | the weight value controlling the color blending |
| $\delta$ | the weight value during the transfer of skin details |
Table 2. Parameter settings for the bat algorithm.

| Parameter | Value |
| --- | --- |
| Search domain | $[0, 1]^D$ |
| Frequency range | [0.0, 1.0] |
| Initial loudness $A_i(0)$ | 1 |
| Initial emission rate $r_i(0)$ | 0.9 |
| $\alpha$ | 0.8 |
| $\gamma$ | 0.9 |
| Dimension $D$ | 1 |
| Fitness value evaluations | 1000 |
| Population size | 10 |
Table 3. Comparison between Guo's method and ours over five independent runs (result images omitted).

| Run | 1 | 2 | 3 | 4 | 5 | Guo |
| --- | --- | --- | --- | --- | --- | --- |
| Weight value | 0.04942 | 0.06211 | 0.05080 | 0.04300 | 0.03100 | 0.8 |
| Beauty value | 54.2054 | 54.2700 | 54.1822 | 54.1819 | 54.1600 | 48.75 |
Table 4. Six representative types of makeup style.

| Label | Makeup style |
| --- | --- |
| A | Euramerica makeup style |
| B | Euramerica smoky-eyes makeup style |
| C | Asian smoky-eyes makeup style |
| D | Asian retro makeup style |
| E | Korean makeup style |
| F | Japanese makeup style |
Table 5. The same girl wears different makeups (result images omitted).

| Style | A | B | C | D | E | F |
| --- | --- | --- | --- | --- | --- | --- |
| Weight value | 0.02029105 | 0.16022225 | 0.92002203 | 0.08074119 | 0.38217941 | 0.23897183 |
| Beauty value with makeup | 59.69 | 59.58 | 54.19 | 58.48 | 58.45 | 56.83 |
Table 6. Different girls wear the same makeup (result images omitted).

| Image | a | b | c | d | e | Guo |
| --- | --- | --- | --- | --- | --- | --- |
| Beauty value without makeup | 61.5 | 49.77 | 58.08 | 54.16 | 51.13 | 52.48 |
| Weight value | 0.96340331 | 0.75844676 | 0.92040913 | 0.82734115 | 0.13291619 | 0.38217941 |
| Beauty value with makeup | 71.23 | 59.05 | 65.11 | 60.5 | 59.71 | 58.45 |
Table 7. Quantitative comparison between the non-makeup case and ours.

| Comparison | Much Better | Better | Same | Worse | Much Worse |
| --- | --- | --- | --- | --- | --- |
| Ours vs. non-makeup | 53.71% | 34.38% | 9.43% | 2.48% | 0% |
Table 8. Quantitative comparison between Guo's method and ours.

| Comparison | Much Better | Better | Same | Worse | Much Worse |
| --- | --- | --- | --- | --- | --- |
| Ours vs. Guo | 24.38% | 43.95% | 21.62% | 8.71% | 1.33% |
