Article

Ethnic Differences in Women’s Perception of Simulated Facial Aging over a 15-Year Horizon: A GAN-Based Model Approach

1 L’Oréal Research and Innovation, 9 Rue Pierre Dreyfus, Clichy, 92110 Paris, France
2 Lancôme International, 92044 Levallois, France
* Author to whom correspondence should be addressed.
Cosmetics 2025, 12(4), 154; https://doi.org/10.3390/cosmetics12040154
Submission received: 7 April 2025 / Revised: 18 June 2025 / Accepted: 3 July 2025 / Published: 21 July 2025
(This article belongs to the Section Cosmetic Dermatology)

Abstract

This study assessed the accuracy of a long-term AI-based projection of signs of facial aging and its acceptance by consumers. Standardized photographs of 25 Chinese and 25 French women were first taken at T0 and graded using ethnic-specific skin aging atlases. An AI-based algorithm then aged the photographs by 10 (T10) and 15 (T15) years. A total of 246 women from China, France, and Thailand compared these images in pairs (T0 vs. T0 + 10 or T0 + 15) and provided feedback on their overall impressions, realism, and psychological acceptance via a questionnaire. Apart from lower face ptosis (p < 0.01), gradings of the simulated images showed no significant differences, regardless of ethnicity. Irrespective of ethnic background, 62–78% of panelists found the projections realistic and liked them, and 85–96% of Chinese and French panelists were willing to test them on themselves. In contrast, 47% of Thai panelists were reluctant to try, and 4–14% of all panelists found the simulation scary, indicating some degree of cultural influence. This study confirms women’s acceptance of their projected future facial appearance, with some cultural divergence. It also validates a methodology for exploring skin aging towards more realistic and personalized cosmetic improvement and innovation.

1. Introduction

Previous research has attempted to clinically assess and grade the changes in facial features that occur progressively with aging, whether chronological or sun-induced [1,2,3,4,5,6,7,8,9,10,11,12]. These studies, conducted on both genders and on subjects of different ethnicities and ages, followed different methodologies. With the help of skin aging atlases, which individually grade 15 to 20 facial features [13,14,15,16], they performed global clinical gradings in vivo or on standardized photographs, zoomed in on selected facial features, and followed their alterations with age. More recently, an AI-based algorithm was developed that automatically grades photographs taken by the subjects themselves with their smartphones.
This approach, validated by dermatologists/experts and insensitive to artificial lighting conditions, made it possible to extend such studies to thousands of subjects of different ethnicities [1], with the algorithm performing the analysis anonymously within seconds.
In brief, irrespective of the methodology used, all studies converged in establishing that the slow, decades-long progression of facial aging shows (i) differences between genders and ethnicities; (ii) a rate of increasing severity specific to each facial feature along the lifespan, depending on gender and ethnicity; and (iii) specific impacts of solar exposure, chronic aerial pollution, or stress on some facial signs of aging [16,17].
Irrespective of the adopted methodology, all these studies graded a given facial status at a specific point in time “t” (a snapshot) across subjects of different ages (e.g., 18–70 y), a necessary step to assess the prior impacts of individual factors (gender, ethnicity, sun exposure frequency, lifestyle, diet, etc.) on the aging process. This notion of individual “past history” led us to adopt an inverse approach in the present study, i.e., to illustrate the future facial status of young and middle-aged women, 10 or 15 years after the initial “snapshot”. Such an objective was achievable through a recently developed dedicated algorithm that integrates the references of the skin aging atlases covering women of different ethnicities along their lifespan.
An exploratory study was therefore undertaken, taking standardized facial photographs from two small cohorts of 35–49-year-old East Asian and European women as the initial facial appearances. These snapshots were then artificially modified by an AI-based algorithm at a 10–15-year horizon (2034 to 2039) to simulate the facial aging occurring during this period. Preference was given to a 10–15-year simulation because it is applicable to all future consumers regardless of their age, keeps a reasonable age leap with respect to cultural codes, and avoids inducing a photorealism bias in consumer perception.
Such a two-fold approach involved, first, an appraisal of the clinical accuracy of the simulation, assessed by 15 experts and dermatologists, and second, an appraisal of its feasibility, realism, and reception (like, dislike, contentment, scariness, etc.) by 246 women from three different countries, aged 20–60 y, considered faithful representatives of consumers. The results of this pilot study are the object of the present paper.

2. Materials and Methods

2.1. Full-Face Photographs

Fifty full-face standardized photographs of two cohorts of 25 healthy Chinese women and 25 French Caucasian women, aged 35–49 y, were taken in similar technical conditions in our local facilities (Shanghai, China, and Paris, France) using a Nexa® device (Canfield Beauty, Parsippany, NJ, USA).
All subjects were selected through two local agencies. In addition to the age range being the primary criterion of inclusion, an absence of any facial disorder (scars, acne, melasma, seborrheic dermatitis, telangiectasia, etc.) was the other criterion. All subjects were informed (in written and oral forms) of the objective of the study and signed an informed consent form. The latter mentioned that these photographs were taken under strict anonymous conditions (eyes shut), blind-coded, and erased post-study. These photos were referred to as T0, whereas those artificially modified were coded as T0 + 10 and T0 + 15. For the photographs, subjects were asked to adopt the most neutral expression, with closed eyes and visible, uncovered hair.

2.2. Part I: Facial Signs and Grading Scales Integrated in the AI-Based System

Eight and seven facial aging signs, previously illustrated and graded in the skin aging atlases, were retained for integration in the AI-based algorithm for East Asian and European subjects, respectively, as shown in Table 1 and Table 2. Forehead wrinkles were not included for European women, as these were previously shown to progress erratically along the lifespan [13].

2.3. Simulation and Grading Algorithm

An AI-based algorithm for projective aging simulation comprised three building blocks:
(i) A convolutional neural network based on the ResNet (Residual Neural Network) architecture [18], trained to estimate clinical scores from a local image patch using the skin aging atlas scales as references [13,14,15]. For the seven or eight facial signs associated with each ethnicity, one dedicated model was used to estimate the grade from its corresponding area.
(ii) Once the scores were established, the second block, which was dedicated to aging knowledge tables, computed the progression rate of each clinical sign grade over 10 or 15 years for average facial aging. This method was previously described [16,17].
(iii) The last block of the AI-based system was a skin aging model derived from the AgingMapGAN [19] model. Using a modified StarGAN [20] architecture that performs image-to-image translation, each local crop was virtually “aged” to match the grade computed for either +10 or +15 years. To ensure the accuracy of these transformed crops, they were fed back through the first block to verify that the output reached the target ‘aged’ score.
Finally, each transformed crop was stitched back onto the original full-face image (T0) using feathering to soften the edges and provide a more natural and realistic reconstruction.
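The grade, progress, age, and verify loop described in blocks (i) to (iii), together with the feathered stitching, can be sketched in simplified form. The graders, knowledge table values, and generator below are illustrative stand-ins (the real system uses per-sign ResNet graders and a StarGAN-derived generator, which are not public); only the control flow and the alpha-blend stitching are meant to be faithful to the description above.

```python
import numpy as np

# Illustrative stand-ins: grade_patch plays block (i), KNOWLEDGE plays
# block (ii) (average grade progression over 10/15 years; values invented),
# age_patch plays block (iii) (the GAN generator).
KNOWLEDGE = {
    "nasolabial_fold": {10: 0.6, 15: 1.0},
    "forehead_wrinkles": {10: 0.9, 15: 1.4},
}

def grade_patch(patch):
    """Block (i) stand-in: in the real system a ResNet regresses the
    clinical grade; here the 'grade' is simply the patch mean."""
    return float(patch.mean())

def age_patch(patch, delta):
    """Block (iii) stand-in: nudge the patch so its grade rises by ~delta."""
    return patch + delta

def simulate_sign(patch, sign, horizon, tol=0.05, max_iter=20):
    """Grade at T0, look up the target progression (block (ii)), then age
    and re-grade until the transformed crop reaches the target score."""
    target = grade_patch(patch) + KNOWLEDGE[sign][horizon]
    aged = patch
    for _ in range(max_iter):
        err = target - grade_patch(aged)   # verification pass via block (i)
        if abs(err) <= tol:
            break
        aged = age_patch(aged, err)
    return aged, grade_patch(aged), target

def stitch(full, crop, y, x, feather=4):
    """Paste crop back into full with a linear alpha ramp (feathering)
    at its borders, softening the seam of the reconstruction."""
    h, w = crop.shape
    mask = np.ones((h, w))
    ramp = np.linspace(0.0, 1.0, feather, endpoint=False)
    mask[:feather, :] *= ramp[:, None]
    mask[-feather:, :] *= ramp[::-1][:, None]
    mask[:, :feather] *= ramp[None, :]
    mask[:, -feather:] *= ramp[None, ::-1]
    region = full[y:y + h, x:x + w]
    full[y:y + h, x:x + w] = mask * crop + (1 - mask) * region
```

The re-grading loop is what makes the transformation controllable: the generator output is accepted only once the grader agrees it carries the target ‘aged’ score.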

2.4. Grading of Facial Skin Aging by the Expert Panel

Fifteen experts and dermatologists evaluated the facial signs presented in Table 1 and Table 2 using resized and reframed standardized pictures edited with Photoshop version 10®. This ensured that only one facial area of interest was displayed on the screen, without interference from other areas. Pictures were presented to every expert in a random order to eliminate bias, and, to guarantee the robustness of the process, several pictures were presented twice during the evaluation. The evaluation was performed under standardized conditions of lighting, expert position, and equipment calibration. For a given facial sign on each subject, the score attributed was the average of the values graded by all fifteen experts.
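As a small illustration of this consensus scoring, the sketch below averages a simulated 15-expert grading matrix and uses the duplicated presentations for a simple test-retest check; the grades and noise level are entirely synthetic, not study data.

```python
import numpy as np

# Synthetic stand-in for the grading session: 15 experts x 25 subjects,
# one facial sign; true_grade and the 0.3 noise level are invented.
rng = np.random.default_rng(0)
true_grade = rng.uniform(1.0, 5.0, size=25)
grades = true_grade + rng.normal(0.0, 0.3, size=(15, 25))

# Score attributed per subject: the average of all fifteen experts.
consensus = grades.mean(axis=0)

# The pictures presented twice support a simple test-retest check:
# a second grading pass is simulated here by adding fresh grading noise.
regrade = grades + rng.normal(0.0, 0.3, size=grades.shape)
retest_gap = np.abs(grades - regrade).mean()
```

Averaging over 15 raters shrinks the per-subject error well below the noise of any single expert, which is the rationale for the consensus score.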

2.5. Part II: Appraising Naïve Panel

A total of 246 women aged 20–60 years were recruited locally in three different countries (China, n = 80; France, n = 81; Thailand, n = 85) to make up a well-balanced consumer panel of different cultures and age distributions. The Thai population was representative of different ethnicities from Central and North-Eastern Thailand.
Inclusion criteria for the consumer panel were good visual acuity and ease in handling a computer. The subjects signed an informed consent form and were given instructions on the task to complete. During their visit to the three local facilities located in Shanghai, Paris, and Bangkok, the consumer panel was tasked with filling in a questionnaire (see Section 2.6) after viewing the photographs presented in pairs. These pairs systematically included T0 and were presented randomly (e.g., T0 vs. T0 + 10 or T0 vs. T0 + 15), as shown in Figure 1.
The eyes visible on the photographs were first masked with a black rectangle to maintain anonymity. These were then projected on a calibrated screen in comparable conditions of lighting. The consumers were all seated at an equal distance from the screen, and to reduce ocular fatigue, all examinations were limited to 30 sessions followed by a 30-min resting period.
To ensure the robustness of this process, several pairs of photographs were presented twice at random to the panelists during the perception recording. For each of the three countries, the answers were pooled. The flow of the study is depicted in Figure 2.

2.6. Questionnaire

The questionnaire provided was split into three categories described below in order to assess the perception of the naïve consumer panel on the extrapolating procedure:
(i) Realism: “How realistic do you find the simulation?” with a 5-point scale: “not realistic at all”, “rather not realistic”, “I do not know”, “rather realistic”, “very realistic”.
(ii) Liking: “How much do you like this simulation?” with a 5-point scale: “no, not at all”, “no, rather not”, “I do not know”, “yes, rather”, “yes, a lot”.
(iii) Overall opinion: to gather a global perception and identify axes of improvement for the system, using three questions:
First, “would you like to test this kind of artificial intelligence on yourself?” (answers being either yes; no; or I do not know).
Second, “Overall, do you find these simulations consistent?” (answers being yes, both simulations; yes, only the one that predicts 10 years later; yes, only the one that predicts 15 years later; or no, neither of the two simulations and why).
Third, “Does anything disturb you?” (answers being: nothing disturbs me; wrinkles are not increased enough; wrinkles are not homogeneous on the whole face; pigmentary spots are not increased enough; the result is not natural; the result is scary; the sagging of the skin is not increased enough; changes in skin texture are not noticeable; changes in eye contour are not noticeable enough; hair color is unchanged; other, please specify).
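Pooling the “positive answers” later reported in the Results amounts to a top-two-box count on these 5-point scales. A minimal sketch, where the function name and the answer distribution are hypothetical:

```python
from collections import Counter

# Hypothetical answers to the 'Realism' item; pooling 'positive answers'
# is a top-two-box count on the 5-point scale.
POSITIVE = {"rather realistic", "very realistic"}

def positive_pct(answers):
    """Rounded percentage of panelists choosing a positive scale point."""
    counts = Counter(answers)
    return round(100 * sum(counts[a] for a in POSITIVE) / len(answers))

answers = (["very realistic"] * 30 + ["rather realistic"] * 25
           + ["I do not know"] * 10 + ["rather not realistic"] * 15)
# 55 positive answers out of 80 panelists -> 69%
```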

2.7. Statistics

To validate the accuracy of the simulation, the knowledge-based and simulated differences, (T0 + 10 y − T0) and (T0 + 15 y − T0), were statistically compared within three 5-year age clusters using the statistical software Xlstat 2023 (Addinsoft™, Bordeaux, France). For each facial sign, averaged over the 15 experts’ gradings, knowledge and simulation were compared using a two-factor analysis of variance (factors: Model and Age-cluster) on the difference between T0 and T0 + 10 years or between T0 and T0 + 15 years, at a 5% risk level.
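For readers without Xlstat, the same two-factor comparison can be reproduced with a standard balanced two-way ANOVA. The sketch below implements it from scratch with NumPy/SciPy; the cell layout mirrors the Model × Age-cluster design, but any values fed to it here would be illustrative, not the study’s data.

```python
import numpy as np
from scipy import stats

def two_way_anova(cells):
    """Balanced two-way ANOVA with interaction.

    cells[(model, age_cluster)] holds the replicate values (e.g., the
    per-subject grade differences T0+10y minus T0); every cell must
    contain the same number of replicates."""
    A = sorted({a for a, _ in cells})          # factor 1: Model
    B = sorted({b for _, b in cells})          # factor 2: Age-cluster
    n = len(next(iter(cells.values())))
    data = np.array([[cells[(a, b)] for b in B] for a in A])  # (|A|,|B|,n)
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))
    mean_b = data.mean(axis=(0, 2))
    mean_ab = data.mean(axis=2)
    ss_a = n * len(B) * ((mean_a - grand) ** 2).sum()
    ss_b = n * len(A) * ((mean_b - grand) ** 2).sum()
    ss_ab = n * ((mean_ab - grand) ** 2).sum() - ss_a - ss_b
    ss_e = ((data - mean_ab[..., None]) ** 2).sum()
    df_a, df_b = len(A) - 1, len(B) - 1
    df_ab, df_e = df_a * df_b, len(A) * len(B) * (n - 1)
    ms_e = ss_e / df_e
    result = {}
    for name, ss, df in (("Model", ss_a, df_a),
                         ("Age-cluster", ss_b, df_b),
                         ("Interaction", ss_ab, df_ab)):
        f = max(ss, 0.0) / df / ms_e           # clamp tiny negative rounding
        result[name] = (f, stats.f.sf(f, df, df_e))  # (F statistic, p-value)
    return result
```

An effect is flagged at the 5% risk level when its p-value falls below 0.05; a non-significant Model effect supports agreement between the knowledge-based and simulated progressions.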

3. Results

3.1. Robustness of Simulation

Table 3 and Table 4 show the results of the comparison between what is already known and published on the AI-based system in the image simulation (+10 years or +15 years) and what the fifteen experts graded. The comparison was conducted for both East Asian and European cohorts.
With the exception of ptosis of the lower face, the majority of the values graded against the skin aging atlases [13,14,15] yielded non-significant p-values, which supports the accuracy and robustness of the simulation. Where significant differences were noticed, the experts’ grading of the simulated images was lower than the order predicted by the algorithm.

3.2. Perception of Simulation Experience

Positive answers were pooled from Table 5 and expressed as a rounded percentage (%). Specifically, rows 4 and 5 of Table 5 indicated that the photographic simulations were widely perceived as being realistic and liked by the three ethnic groups (more so by the Chinese panel) irrespective of the simulated age severity (T0 + 10 years or T0 + 15 years).
Interestingly, the consumer panelists found the realism to be similar for both simulated timelines (T0 + 10 years or T0 + 15 years). With regard to acceptance, all panelists, and Chinese women in particular, were willing to test the procedure on themselves for both simulated timelines, whereas 47% of Thai women were reluctant.
Evaluating the coherence/consistency of the simulation involved analyzing the responses, expressed as percentages, from the three panels to the “Does anything disturb you?” section of the questionnaire. Given that 80% of all consumer panelists reported some disturbances, Table 6 lists the elements of the simulating process to which improvements could be brought. These mostly concerned more accentuated or more homogeneous wrinkles, sagging, and skin texture, although the last term is rather global and partly redundant with wrinkling and sagging.
A striking difference between East Asian women and European women (31% and 38% vs. 9%, respectively) was found on pigmentary spots, likely explained by the fact that these are a much more specific trait of East Asian skin aging [21]. Similarly, the eye contour seems to be another element to which improvements should be made.
Interestingly, the simulation process appeared scary to only a few subjects and rather natural to most. Thai subjects, however, differed from Chinese women on certain aspects, reporting the result as more natural but with less pronounced and more uneven wrinkles, particularly around the eye contour. Of note, Thai women clearly differ from the two other groups in their willingness to test this procedure on themselves, despite their rather good appreciation of the realism of the process (Table 5).

4. Discussion

Virtually imaging the human face (or hair) to illustrate possible and temporary modifications is not a new procedure and is widely used in retail or beauty salons (make-up, hair dyeing, perms, etc.), supported by increasingly sophisticated technologies. The work presented here explored a different, less “neutral” aspect of the simulation process, as it simulated the long-term transformations naturally brought to the facial traits of a given individual. While other studies have previously investigated the broader use of AI in image-based fields [22,23], this study remains, to our knowledge, the first attempt to project the facial changes that will likely occur in an individual in the long term. This objective was grounded in the vast amount of data provided by the validated skin aging atlases, established on thousands of subjects of different genders, environments, and ethnicities.
A previous study investigated how generative AI models represented women, with a focus on racial and body type inclusivity [24]. It explored two main themes: how women are depicted in AI-generated imagery produced from beauty-related prompts, and how women perceive these representations. It highlighted that 42% of women expressed concerns about underrepresentation across age groups and misrepresentation related to ethnicity [24]. The projection of facial changes likely to occur long-term in a given individual, as depicted here, constitutes an unprecedented, realistic approach, as it included women of all ages and of different nationalities, marking a shift towards a more inclusive representation.
It was initially believed that the present “Dorian Gray” approach might be rejected by consumers for many intimate or cultural reasons, in brief being judged as too scary. However, this pilot study suggested that such is not the case. Besides its recognized inherent realism for a majority of subjects, the simulation process employed here was globally “liked” by more than two thirds of the appraisal panel, taken as a faithful representation of consumers.
The results obtained here should nonetheless be seen as qualitative and taken with caution, given the relatively low number of appraising subjects (n = 246) compared to the millions of inhabitants of the three countries. However, they remain valuable in guiding the improvements of the simulation process. To further add to the robustness of this study, more images of women representative of South-East Asia could be included in the analysis such as Indonesian women. Furthermore, the present work highlights the accuracy and robustness in the simulation process with respect to the input values provided to the AI-based system.
Whatever its strengths and weaknesses, this simulation methodology was well accepted by a majority of the appraising participants, likely for mixed and intimate reasons such as curiosity, aesthetic exigence, and/or cultural influence. The latter is partly evidenced by differences in the appraisals recorded between Chinese and Thai women (wrinkled features, unnatural results, etc.). Despite this, both groups are often broadly categorized under the same ‘East Asian’ cluster, which in reality encompasses a diverse mosaic of cultural imprints. From a technical viewpoint, the GAN approach used here has proven its efficacy along with ease of processing. As a preliminary step, the present work laid the groundwork for more complex studies incorporating larger sample sizes, additional ethnicities, gender diversity, etc., once the suggested improvements are implemented and validated.
We acknowledge that the approach adopted in this study goes beyond the superficial aspect of facial skin. It touches on a more complex and intimate self-appreciation of physical appearance and its broader social implications. As such, communicating the results requires strict adherence to ethical guidelines, with a clear explanation that this virtual extrapolation illustrates the most probable long-term changes and is by no means a certainty. It should be emphasized that lifestyle factors such as physical activity, sleep patterns, diet, smoking, cosmetic habits, sun exposure, and air pollution, collectively referred to as the exposome, have a strong bearing on the skin aging process [3,11,25,26,27].
The validated and robust GAN solution employed here can play a major role in the beauty and skincare field, benefiting consumers, dermatologists, surgeons, and beauticians. This is particularly relevant in fields where environmental and lifestyle choices have a slow and insidious impact on the skin; building awareness and trust through more personalized diagnostics and services is therefore crucial. In addition, emerging trends around longevity are shaping how to best prepare the skin of tomorrow and assess the potential simulated effects of formulae, experiences, or procedures.
This present work opens the door to a projective cosmetic concept that is worthy of being explored further as it carries a real, intimate and personalized characteristic. With regard to these latter aspects, this novel procedure demands full respect for confidentiality, along with appropriate communication tailored to a culturally diverse audience.
This study evaluated the accuracy of an AI-based facial aging simulation system in predicting long-term (10–15 years) facial aging signs at the individual level, as well as its acceptance by consumers. Standardized photographs were taken at baseline (T0) and assessed using ethnic-specific skin aging atlases to generate projected images at T10 and T15 through the AI system. Although some cultural variations were observed, this pilot study confirms overall female acceptance of their projected future appearance and underscores the potential of AI-driven tools towards more realistic, personalized, and durable cosmetic innovations.

Author Contributions

Conceptualization, F.F.; methodology, F.F.; software, J.D.; validation, F.F. and P.-A.B.; formal analysis, F.F., P.-A.B., F.W., P.T. and A.C.; investigation, F.F.; resources, F.F.; data curation, F.F., F.W., P.T. and A.C.; writing—original draft preparation, F.F.; writing—review and editing, F.F. and G.B.; visualization, F.F.; supervision, F.F.; project administration, F.F.; funding acquisition, F.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded entirely by L’Oréal group.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. In accordance with Article R1121-1 of the French Public Health Code, ethics committee approval was not required as this study was limited to consumer perception and did not involve any intervention or testing on human subjects.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings are available from the corresponding author upon reasonable request.

Acknowledgments

The authors want to thank Didier Saint-Leger, Anu Singhal, Cedric Larcher, Aurore Phithamethany, Sylvia Tournery, Gayane Azadiguian, Annie Black, Françoise Lehmann, Chengda Ye, Huixia Qiu, Roland Bazin, Matthieu Perrot, Sileye Ba, Ava Mondji, Lea Rousseau, Isabelle Castiel, Pascale Mora, Aurelie Abric and Thierry Lageat not only for their great help in the completion of this paper and global study but also for their strong support and enthusiasm.

Conflicts of Interest

F.F., P.-A.B., J.D., F.W., P.T., A.C. and G.B. are employees of the L’Oréal group. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
GAN: Generative Adversarial Network
ResNet: Residual Neural Network

References

  1. Flament, F.; Jacquet, L.; Ye, C.; Amar, D.; Kerob, D.; Jiang, R.; Zhang, Y.; Kroely, C.; Delaunay, C.; Passeron, T. Artificial Intelligence analysis of over half a million European and Chinese women reveals striking differences in the facial skin ageing process. J. Eur. Acad. Dermatol. Venereol. 2022, 36, 1136–1142. [Google Scholar] [CrossRef] [PubMed]
  2. Krutmann, J.; Bouloc, A.; Sore, G.; Bernard, B.A.; Passeron, T. The skin aging exposome. J. Dermatol. Sci. 2017, 85, 152–161. [Google Scholar] [CrossRef] [PubMed]
  3. Gilchrest, B.A. Skin aging and photoaging: An overview. J. Am. Acad. Dermatol. 1989, 21, 610–613. [Google Scholar] [CrossRef] [PubMed]
  4. Levakov, A.; Vuckovic, N.; Doali, M.; Mocko-Kacanski, M.; Bozanic, S. Age-related skin changes. Med. Pregl. 2012, 65, 191–195. [Google Scholar] [CrossRef] [PubMed]
  5. Battie, C.; Jitsukawa, S.; Bernerd, F.; Del Bino, S.; Marionnet, C.; Verschoore, M. New insights in photoaging: UVA-induced damage and skin types. Exp. Dermatol. 2014, 23, 7–12. [Google Scholar] [CrossRef] [PubMed]
  6. Akiba, S.; Shinkura, R.; Miyamoto, K.; Hillebrand, G.; Yamaguchi, N.; Ichihashi, M. Influence of chronic UV exposure and lifestyle on facial skin photoaging—Results from a pilot study. J. Epidemiol. 1999, 9, S136–S142. [Google Scholar] [CrossRef] [PubMed]
  7. Zhao, P.; Zhu, X.; Liu, Y.; Wang, B.; Wang, C.; Burns, F.J. Solar ultraviolet radiation and skin damage: An epidemiological study among a Chinese population. Arch. Environ. Health 1998, 53, 405–409. [Google Scholar] [CrossRef] [PubMed]
  8. Eun, H.C. Cutaneous photodamage in Asians. J. Dermatol. 2001, 28, 614–620. [Google Scholar] [CrossRef] [PubMed]
  9. Chung, J.H. Photoaging in Asians. Photodermatol. Photoimmunol. Photomed. 2003, 19, 109–121. [Google Scholar] [CrossRef] [PubMed]
  10. Takahashi, Y.; Fukushima, Y.; Kondo, K.; Ichihashi, M. Facial skin photoaging and development of hyperpigmented spots from children to middle-aged Japanese women. Skin Res. Technol. 2017, 23, 613–618. [Google Scholar] [CrossRef] [PubMed]
  11. Randhawa, M.; Wang, S.; Leyden, J.J.; Cula, G.O.; Pagnoni, A.; Southall, M.D. Daily use of a facial broad-spectrum sunscreen over one year significantly improves clinical evaluation of photoaging. Dermatol. Surg. 2016, 42, 1354–1361. [Google Scholar] [CrossRef] [PubMed]
  12. Bernerd, F.; Passeron, T.; Castiel, I.; Marionnet, C. The damaging effects of long UVA (UVA1) rays: A major challenge to preserve skin health and integrity. Int. J. Mol. Sci. 2022, 23, 8243. [Google Scholar] [CrossRef] [PubMed]
  13. Bazin, R.; Doublet, E. Skin Aging Atlas. Volume 1, Caucasian Type; Editions Med’Com: Paris, France, 2007. [Google Scholar]
  14. Bazin, R.; Flament, F. Skin Aging Atlas. Volume 2, Asian Type; Editions Med’Com: Paris, France, 2010. [Google Scholar]
  15. Bazin, R.; Flament, F.; Giron, F. Skin Aging Atlas. Volume 3, Afro-American Type; Editions Med’Com: Paris, France, 2012. [Google Scholar]
  16. Flament, F.; Saint-Leger, D. Photoaging’s portrait: The road map towards its photoprotection. Int. J. Cosmet. Sci. 2023, 45, 33–44. [Google Scholar] [CrossRef] [PubMed]
  17. Flament, F.; Qiu, H.; Abric, A.; Charbonneau, A. Assessing changes in some facial signs of fatigue in Chinese women, induced by a single working day. Int. J. Cosmet. Sci. 2019, 41, 21–27. [Google Scholar] [CrossRef] [PubMed]
  18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  19. Despois, J.; Flament, F.; Perrot, M. AgingMapGAN (AMGAN): High-resolution controllable face aging with spatially-aware conditional GANs. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020. [Google Scholar]
  20. Choi, Y.; Uh, Y.; Yoo, J.; Ha, J.-W. StarGAN v2: Diverse image synthesis for multiple domains. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 8185–8194. [Google Scholar]
  21. Nouveau-Richard, S.; Yang, Z.; Mac-Mary, S.; Li, L.; Bastien, P.; Tardy, I.; Bouillon, C.; Humbert, P.; De Lacharrière, O. Skin ageing: A comparison between Chinese and European populations—A pilot study. J. Dermatol. Sci. 2005, 40, 187–193. [Google Scholar] [CrossRef] [PubMed]
  22. Stacy, H.R. The Representation of Feminine Beauty in Generative Artificial Intelligence Models. Ph.D. Thesis, Murray State University, Murray, KY, USA, 2025. [Google Scholar]
  23. Islam, U.; Mehmood, G.; Al-Atawi, A.A.; Khan, F.; Alwageed, H.S.; Cascone, L. NeuroHealth Guardian: A Novel Hybrid Approach for Precision Brain Stroke Prediction and Healthcare Analytics. J. Neurosci. Methods 2024, 409, 110210. [Google Scholar] [CrossRef] [PubMed]
  24. Osama, M.; Khan, S.S.; Khan, S.; Ahmad, S.; Mehmood, G.; Ali, I. High-Quality Multi-Focus Image Fusion: A Comparative Analysis of DCT-Based Approaches with Their Variants. IECE J. Image Anal. Process. 2025, 1, 27–35. [Google Scholar]
  25. Passeron, T.; Krutmann, J.; Andersen, M.L.; Katta, R.; Zouboulis, C.C. Clinical and biological impact of the exposome on the skin. J. Eur. Acad. Dermatol. Venereol. 2020, 34 (Suppl. S4), 4–25. [Google Scholar] [CrossRef] [PubMed]
  26. Krutmann, J.; Schalka, S.; Watson, R.E.B.; Wei, L.; Morita, A. Daily photoprotection to prevent photoaging. Photodermatol. Photoimmunol. Photomed. 2021, 37, 482–489. [Google Scholar] [CrossRef] [PubMed]
  27. Hughes, M.C.; Williams, G.M.; Baker, P.; Green, A.C. Sunscreen and prevention of skin aging: A randomized trial. Ann. Intern. Med. 2013, 158, 781–790. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Examples of pairs presented for appraisal by the naïve panel. T0 vs. T0 + 10 y and T0 vs. T0 + 15 y for European (41 y) and East Asian (49 y) women (phototype III).
Figure 2. A schematic illustration of the flow of the pilot study.
Table 1. The eight facial signs for East Asian women and their scale’s range used in the simulation.
| Clinical Cluster | Facial Sign | Definition of Scored Observation | Scale |
|---|---|---|---|
| Wrinkles/Texture | Forehead wrinkles | Depth of the transverse wrinkles on the forehead. | 0–8 |
| Wrinkles/Texture | Glabellar wrinkles | Depth of vertical wrinkles between eyebrows. | 0–6 |
| Wrinkles/Texture | Inter-ocular wrinkles | Depth of horizontal folds between inner eye corners. | 0–7 |
| Wrinkles/Texture | Periorbital wrinkles | Depth of folds at malar area below crow’s feet, eye orbit excepted. | 0–9 |
| Wrinkles/Texture | Nasolabial fold | Depth of the fold present between the base of the nose and lips. | 0–7 |
| Pigmentation signs | Density of pigmentary spots | Number of spots per area unit on the cheek. | 0–7 |
| Vascular signs | Diffused redness | Diffused redness and micro-vessels visible, especially on cheeks. | 0–4 |
| Firmness/Sagging | Ptosis of lower part of the face | Sagging severity of the lower parts of the chin. | 0–6 |
Table 2. Facial signs for European women and their scale’s range used in the simulation.
| Clinical Cluster | Facial Sign | Definition of Scored Observation | Scale |
|---|---|---|---|
| Wrinkles/Texture | Glabellar wrinkles | Depth of vertical wrinkles between both eyebrows. | 0–5 |
| Wrinkles/Texture | Periorbital wrinkles (upper cheek area) | Depth of folds found at the malar zone below crow’s feet, eye orbit excepted. | 0–5 |
| Wrinkles/Texture | Nasolabial fold | Depth of the deepest fold present on the face between base of the nose and lips. | 0–5 |
| Wrinkles/Texture | Marionette lines | Depth of the deepest fold at the corner of lips. | 0–6 |
| Pigmentation signs | Whole-face pigmentation | Density of pigmentation disorders on all the face. | 0–5 |
| Vascular signs | Vascular disorders | All diffused redness and dilation of blood micro-vessels visible on the face. | 0–7 |
| Firmness/Sagging | Ptosis of the lower part of the face | Sagging severity of the lower part of the face on each side of the chin. | 0–5 |
Table 3. For East Asian women, statistical comparison between the aging increments applied by the AI-based algorithm (“known”) and the experts’ grading of the simulated images (“scored”); non-significant differences indicate that the simulation is accurate and robust. Significant differences are defined as p < 0.05.
| Average Aging | Δ T10–T0 (35–39 y) | Δ T15–T0 (35–39 y) | Δ T10–T0 (40–44 y) | Δ T15–T0 (40–44 y) | Δ T10–T0 (45–49 y) | Δ T15–T0 (45–49 y) |
|---|---|---|---|---|---|---|
| Nb Vol | 10 | 10 | 9 | 9 | 6 | 6 |
| Forehead wrinkles scored | 0.65 | 1.32 | 0.86 | 1.41 | 0.8 | 1.08 |
| Forehead wrinkles known | 0.85 | 1.45 | 0.90 | 1.35 | 1.05 | 1.50 |
| p-value | 0.09 | 0.33 | 0.81 | 0.78 | 0.16 | 0.09 |
| Glabellar wrinkles scored | 0.43 | 0.81 | 0.39 | 0.69 | 0.52 | 0.89 |
| Glabellar wrinkles known | 0.38 | 0.68 | 0.40 | 0.65 | 0.55 | 0.80 |
| p-value | 0.63 | 0.14 | 0.89 | 0.54 | 0.67 | 0.45 |
| Inter-ocular wrinkles scored | 0.57 | 1.05 | 0.74 | 1.39 | 1.28 | 1.64 |
| Inter-ocular wrinkles known | 0.61 | 1.11 | 0.70 | 1.30 | 1.10 | 1.70 |
| p-value | 0.79 | 0.76 | 0.67 | 0.43 | 0.15 | 0.43 |
| Nasolabial fold scored | 0.18 | 0.59 | 0.72 | 1.08 | 0.56 | 0.81 |
| Nasolabial fold known | 0.32 | 0.62 | 0.55 | 0.95 | 0.70 | 1.10 |
| p-value | 0.15 | 0.89 | 0.19 | 0.42 | 0.13 | 0.02 |
| Periorbital wrinkles scored | 0.32 | 0.96 | 0.87 | 0.85 | 0.92 | 1.10 |
| Periorbital wrinkles known | 0.30 | 0.60 | 0.59 | 0.93 | 0.64 | 0.98 |
| p-value | 0.81 | 0.06 | 0.02 | 0.71 | 0.33 | 0.56 |
| Density of spots scored | 0.20 | 0.32 | 0.39 | 0.80 | 0.62 | 0.58 |
| Density of spots known | 0.11 | 0.29 | 0.28 | 0.58 | 0.48 | 0.78 |
| p-value | 0.26 | 0.79 | 0.14 | 0.07 | 0.03 | 0.05 |
| Diffused redness scored | 0.29 | 0.45 | 0.27 | 0.43 | 0.35 | 0.36 |
| Diffused redness known | 0.41 | 0.67 | 0.28 | 0.46 | 0.44 | 0.62 |
| p-value | 0.19 | 0.02 | 0.90 | 0.81 | 0.24 | <0.00 |
| Ptosis scored | 0.06 | 0.23 | 0.08 | 0.18 | 0.04 | 0.31 |
| Ptosis known | 0.35 | 0.6 | 0.34 | 0.42 | 0.33 | 0.41 |
| p-value | <0.00 | <0.00 | <0.00 | <0.01 | <0.00 | 0.38 |
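The comparisons in Tables 3 and 4 amount to testing, for each facial sign and age bracket, whether the expert-scored increments differ from the increments the algorithm was asked to apply. As an illustration only (the article does not specify the exact test used, and the per-volunteer grades below are hypothetical, chosen so that their means match the first Table 3 cell: 0.65 scored vs. 0.85 known), a paired t statistic could be computed as:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(scored, known):
    """Paired t statistic comparing expert-scored aging increments
    with the increments targeted by the algorithm, per volunteer."""
    diffs = [s - k for s, k in zip(scored, known)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical per-volunteer forehead-wrinkle increments (delta T10-T0),
# n = 10 as in the 35-39 y bracket of Table 3.
scored = [0.5, 1.0, 0.5, 0.5, 1.0, 0.5, 1.0, 0.5, 0.5, 0.5]  # expert grading
known = [0.8, 0.9, 0.7, 0.6, 1.1, 0.9, 1.0, 0.8, 0.9, 0.8]   # algorithm target
t_stat = paired_t(scored, known)
```

The p-value would then be read from a Student t distribution with n − 1 degrees of freedom; `scipy.stats.ttest_rel` performs both steps in one call.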
Table 4. For European women, statistical comparison between the aging increments applied by the AI-based algorithm (“known”) and the experts’ grading of the simulated images (“scored”); non-significant differences indicate that the simulation is accurate and robust. Significant differences are defined as p < 0.05.
| Average Aging | Δ T10–T0 (35–39 y) | Δ T15–T0 (35–39 y) | Δ T10–T0 (40–44 y) | Δ T15–T0 (40–44 y) | Δ T10–T0 (45–49 y) | Δ T15–T0 (45–49 y) |
|---|---|---|---|---|---|---|
| Nb Vol | 9 | 9 | 9 | 9 | 7 | 7 |
| Glabellar wrinkles scored | 0.55 | 0.6 | 0.38 | 0.42 | 0.12 | 0.31 |
| Glabellar wrinkles known | 0.60 | 0.70 | 0.40 | 0.50 | 0.20 | 0.30 |
| p-value | 0.64 | 0.34 | 0.76 | 0.25 | 0.15 | 0.92 |
| Nasolabial fold scored | 0.56 | 0.56 | 0.31 | 0.36 | 0.27 | 0.40 |
| Nasolabial fold known | 0.40 | 0.70 | 0.50 | 0.60 | 0.40 | 0.50 |
| p-value | 0.13 | 0.30 | <0.01 | <0.01 | 0.19 | 0.55 |
| Periorbital wrinkles scored | 0.55 | 0.75 | 0.41 | 0.71 | 0.93 | 1.12 |
| Periorbital wrinkles known | 0.50 | 0.80 | 0.50 | 0.90 | 0.70 | 1.10 |
| p-value | 0.55 | 0.66 | 0.51 | 0.21 | 0.10 | 0.86 |
| Marionette lines scored | 0.60 | 1.15 | 0.37 | 0.85 | 0.60 | 0.65 |
| Marionette lines known | 0.50 | 0.70 | 0.40 | 0.70 | 0.50 | 0.80 |
| p-value | 0.51 | 0.01 | 0.73 | 0.04 | 0.54 | 0.38 |
| Whole-face pigmentation scored | 0.57 | 0.64 | 0.53 | 0.48 | 0.20 | 0.26 |
| Whole-face pigmentation known | 0.40 | 0.50 | 0.30 | 0.40 | 0.20 | 0.30 |
| p-value | 0.24 | 0.35 | 0.16 | 0.44 | 0.99 | 0.75 |
| Vascular disorders scored | 0.21 | 0.25 | 0.20 | 0.24 | 0.10 | 0.21 |
| Vascular disorders known | 0.30 | 0.40 | 0.20 | 0.30 | 0.20 | 0.30 |
| p-value | 0.11 | 0.04 | 0.99 | 0.43 | 0.21 | 0.40 |
| Ptosis scored | 0.08 | 0.29 | 0.26 | 0.36 | 0.31 | 0.43 |
| Ptosis known | 0.40 | 0.80 | 0.60 | 0.90 | 0.70 | 1.00 |
| p-value | <0.00 | <0.00 | <0.00 | <0.00 | <0.01 | <0.01 |
Table 5. The general perception of the appraisal consumer panel of the simulation according to countries. All criteria are expressed as percentages (%) in rounded figures.
| Country | Number of Records | Realism +10 y | Realism +15 y | Liking +10 y | Liking +15 y | Test on Myself |
|---|---|---|---|---|---|---|
| France | n = 81 | 66% | 64% | 64% | 62% | 85% |
| China | n = 80 | 72% | 78% | 75% | 78% | 96% |
| Thailand | n = 85 | 64% | 68% | 67% | 65% | 53% |

Realism and Liking: % of answers 4 + 5; Test on myself: % yes for both timings.
Table 6. List of the items of the resulting simulation judged as rather weak, expressed by the three appraising panels, in % (rounded figures).
| Question/Panel | Chinese | French | Thai |
|---|---|---|---|
| Nothing disturbs me | 20% | 20% | 16% |
| Wrinkles are not increased enough | 30% | 27% | 46% |
| Wrinkles are not homogeneous on the whole face | 29% | 25% | 41% |
| Pigmentary spots are not increased enough | 31% | 9% | 38% |
| The result is not natural | 6% | 12% | 38% |
| The result is scary | 14% | 4% | 9% |
| The sagging of the skin is not increased enough | 53% | 37% | 47% |
| Changes in skin texture are not noticeable enough | 31% | 36% | 39% |
| Changes in eye contour are not noticeable enough | 33% | 22% | 19% |
| Hair color is unchanged | 19% | 22% | 26% |
Share and Cite
Flament, F.; Bokaris, P.-A.; Despois, J.; Woodland, F.; Chretien, A.; Tartrat, P.; Balooch, G. Ethnic Differences in Women’s Perception of Simulated Facial Aging over a 15-Year Horizon: A GAN-Based Model Approach. Cosmetics 2025, 12, 154. https://doi.org/10.3390/cosmetics12040154