Proceeding Paper

Machine Learning in Evolving Art Styles: A Study of Algorithmic Creativity †

Faculty of Engineering and Quantity Surveying, INTI International University, Persiaran Perdana BBN, Putra Nilai, Nilai 71800, Malaysia
Presented at the 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering, Yunlin, Taiwan, 15–17 November 2024.
Eng. Proc. 2025, 92(1), 45; https://doi.org/10.3390/engproc2025092045
Published: 30 April 2025
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

Abstract

Machine learning (ML) has played an increasingly pivotal role in shaping and evolving artistic expression, leading to new forms of algorithmic creativity. In this study, we explore how ML models, particularly deep learning algorithms such as generative adversarial networks (GANs), have contributed to evolving art styles by learning from vast datasets of historical and contemporary artworks. These algorithms mimic artistic techniques, generate new styles, and even create novel art forms that blend or deviate from traditional artistic boundaries. The challenges of algorithmic creativity, such as concerns about authorship, originality, and the potential loss of the human touch in art, are also highlighted. The role of machine learning in art raises important philosophical and ethical questions about the nature of creativity and the evolving relationship between human artists and machines. Machine learning has become a powerful tool in expanding the possibilities of artistic expression. While AI-generated art challenges traditional notions of creativity, it also opens up new horizons for collaboration and innovation in art, potentially leading to entirely new art styles in the digital age.

1. Introduction

Art has always been influenced by technological advances, and machine learning represents one of the most groundbreaking tools for artists and technologists alike. With techniques such as Generative Adversarial Networks (GANs) and neural style transfer, machine learning has provided the ability to emulate, evolve, and even originate new art styles that challenge traditional norms of creativity [1].
Machine learning has introduced new dimensions to the world of art, pushing the boundaries of creativity through algorithmic methods [2]. We investigate how machine learning, particularly deep learning techniques, is used to generate evolving art styles, examining both its technical aspects and the broader implications for creativity and art. We also explore the performance of different models, compare traditional and machine-generated art, and provide visual insights into the transformation of styles. Additionally, diagrams and graphs are provided to illustrate algorithmic processes and performance comparisons.
This study aims to review the use of machine learning models in evolving art styles and explore the technological aspects of these models and their implications on creative processes. Specifically, we focus on the contribution of algorithmic creativity—art generated by machines that learn, adapt, and produce novel styles. Figure 1 demonstrates a blend of classical Renaissance-style figures and modern abstract elements, suggesting an algorithmic creative process.

2. Literature Review

The concept of using computers for creative purposes began in the 1960s, when early pioneers started using computers to generate visual patterns and simple forms of art. Harold Cohen, an influential figure in this space, developed the AARON software, one of the first AI programs capable of creating abstract artworks autonomously. AARON used a rule-based approach to produce art, marking an important step towards algorithmic art.
During this period, the focus was primarily placed on developing basic geometric compositions, often in black and white, with simple shapes and line patterns that were visually intriguing but far from the complex, dynamic art generated by modern algorithms. By the 1990s, advancements in neural networks paved the way for more sophisticated forms of computer-generated art. Artificial neural networks (ANNs) provided a means for computers to learn from data, a concept that would later be instrumental in artistic applications.
However, the breakthrough came in the 2010s with the introduction of convolutional neural networks (CNNs) and deep learning techniques [3]. These advances allowed computers to understand intricate visual elements and patterns, enabling the development of complex art pieces that reflected a combination of content and style. The advent of neural style transfer (NST), introduced by Gatys et al. in 2015 [2], marked a significant milestone in machine learning’s impact on art. NST uses pre-trained CNNs to blend the content of one image with the style of another, generating a third image that captures the content and stylistic elements of both.
The method minimizes a combined loss function that accounts for both content similarity and style similarity. Content loss measures the difference between the generated image and the content image, while style loss compares the differences between the style features using Gram matrices.
NST has enabled artists to combine classical and modern styles, experiment with new aesthetics, and generate unique interpretations of well-known artworks. This technique gained rapid popularity due to its intuitive approach, as it allows any individual to create art by selecting a photo and an artwork as inputs. NST has made algorithmic art more accessible to non-experts, democratizing the creative process.
Another breakthrough in the intersection of machine learning and art came in 2014 when Ian Goodfellow [1] introduced GANs. GANs consist of two models—a generator and a discriminator—that work against each other to create realistic images. The generator tries to produce images that mimic real art, while the discriminator tries to identify whether the images are real or generated. GANs have been instrumental in creating diverse styles of art. For example, StyleGAN and ArtGAN have allowed for the generation of portraits, landscapes, and abstract art that blend elements from various artistic movements. GANs are particularly powerful because they generate entirely new images rather than merely transforming existing ones [4]. This capability has made GANs a popular choice for generating novel and imaginative artworks that challenge conventional artistic norms.
GANs have also inspired artists to collaborate with algorithms in co-creating artworks that merge human creativity with machine-generated features. Such collaborations raise questions about the nature of creativity and who should be considered the author of the final piece.
DeepDream, developed by Google in 2015, is another notable technique that uses CNNs to enhance features found in images [3]. By applying gradient ascent to amplify patterns, DeepDream produces visually rich, dream-like imagery. This approach highlights the inner workings of neural networks by exaggerating the features that the model perceives, resulting in highly surreal visuals. DeepDream was among the first tools to give users insight into the features neural networks learn during training. It opened up opportunities for artists to visualize and creatively exploit the hallucinations of deep learning models.

3. Methodology

3.1. Machine Learning Models in Artistic Creation

3.1.1. NST

NST uses CNNs to combine the content of one image with the style of another, creating a novel piece of art. The method takes a content image (often a photograph) and a style image (typically a famous artwork) and produces an output that blends the two. The goal of NST is to minimize a loss function composed of two components: content loss (Lcontent) and style loss (Lstyle), both minimized through backpropagation. Content loss measures the difference between the feature representations of the content image and the generated image. Mathematically, it is given by
$$L_{\mathrm{content}}(p, x) = \sum_{i=1}^{N} \frac{1}{2} \sum_{j} \left( F_{ij}^{x} - F_{ij}^{p} \right)^{2}$$

where $F_{ij}^{x}$ and $F_{ij}^{p}$ are the feature representations of the generated image and the content image at layer $i$, respectively.
Style loss measures the similarity between the style of the generated image and the style image by comparing the Gram matrices of their feature maps.

$$L_{\mathrm{style}}(a, x) = \sum_{i=1}^{N} \frac{1}{4 N_{i}^{2} M_{i}^{2}} \sum_{j,k} \left( G_{jk}^{x} - G_{jk}^{a} \right)^{2}$$

where $G_{jk}^{x}$ and $G_{jk}^{a}$ are the entries of the Gram matrices of the generated and style images at layer $i$, and $N_{i}$ and $M_{i}$ are the number of feature maps at that layer and their size, respectively.
The total loss (Ltotal) is given by
$$L_{\mathrm{total}} = \alpha L_{\mathrm{content}} + \beta L_{\mathrm{style}}$$
where α and β are weights that control the importance of content and style, respectively.
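As a concrete illustration, the three losses can be sketched with NumPy operating directly on feature-map arrays. This is a minimal sketch with illustrative function names, not the NST implementation itself; in practice the arrays `F_x`, `F_p`, and `F_a` would be feature maps extracted from a pre-trained CNN, and the weights `alpha` and `beta` are arbitrary example values.

```python
import numpy as np

def content_loss(F_x, F_p):
    """Squared-error content loss between the feature maps of the
    generated image (F_x) and the content image (F_p)."""
    return 0.5 * np.sum((F_x - F_p) ** 2)

def gram_matrix(F):
    """Gram matrix of a feature map F of shape (N, M):
    N filters, M spatial positions; G[i, j] = <F_i, F_j>."""
    return F @ F.T

def style_loss(F_x, F_a):
    """Style loss for one layer: squared difference of Gram
    matrices, scaled by 1 / (4 * N^2 * M^2)."""
    N, M = F_x.shape
    G_x, G_a = gram_matrix(F_x), gram_matrix(F_a)
    return np.sum((G_x - G_a) ** 2) / (4.0 * N**2 * M**2)

def total_loss(F_x, F_p, F_a, alpha=1.0, beta=1e3):
    """L_total = alpha * L_content + beta * L_style."""
    return alpha * content_loss(F_x, F_p) + beta * style_loss(F_x, F_a)
```

During optimization, the generated image is updated by backpropagating the gradient of this total loss with respect to its pixels, which the sketch above omits.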
Figure 2 describes the transformation process, showing the content image, the style image, and the generated artwork that combines both elements, with arrows indicating the flow of the process.

3.1.2. GANs

GANs were introduced by Ian Goodfellow in 2014 [1] and consist of two neural networks—generator and discriminator—which are trained adversarially. GANs have been instrumental in the generation of new art styles by learning from existing datasets of artworks.
The generator creates new images based on random noise, while the discriminator assesses whether the images are real or fake. The generator is trained to produce images that are indistinguishable from real artworks. GANs are well-known for their ability to create highly realistic and imaginative artworks, as shown in Figure 3.
The training process of GANs is formulated as a min-max optimization problem.
$$\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
where G represents the generator, which creates new images from random noise z, and D represents the discriminator, which outputs the probability that a given image is real. The generator attempts to minimize log(1 − D(G(z))), while the discriminator tries to maximize it.
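The value function above can be estimated empirically from discriminator outputs. The sketch below is illustrative only (the function name and sample values are ours, not part of any GAN library); it averages log-probabilities over a batch of real and generated samples, standing in for the two expectations.

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of the GAN objective
    V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    given discriminator outputs on real samples (d_real)
    and on generated samples (d_fake)."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# At the theoretical equilibrium the generator matches the data
# distribution, the optimal discriminator outputs 0.5 everywhere,
# and V = log(1/2) + log(1/2) = -log 4.
equilibrium = gan_value([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
```

Training alternates gradient steps: the discriminator ascends this value while the generator descends it, driving the estimate toward the equilibrium value of −log 4.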

3.1.3. DeepDream and Algorithmic Enhancement

DeepDream, developed by Google, enhances and overinterprets images using convolutional neural networks, creating dream-like visuals by accentuating features detected by the model [3]. The model applies feature visualization techniques to enhance edges, patterns, and colors, producing images that often resemble psychedelic versions of the original.
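The gradient-ascent idea behind DeepDream can be sketched with a toy differentiable "layer". This is a deliberately simplified stand-in, not Google's implementation: a random linear map plays the role of learned convolutional filters, and the objective amplified is the squared norm of its activations.

```python
import numpy as np

def dream_step(image, weights, lr=0.1):
    """One gradient-ascent step that amplifies the response of a
    toy linear 'layer'. The objective is 0.5 * ||W x||^2, whose
    gradient with respect to x is W^T W x; stepping along it
    exaggerates whatever features the layer already responds to."""
    activation = weights @ image
    grad = weights.T @ activation   # d/dx of 0.5 * ||W x||^2
    return image + lr * grad

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))         # stand-in for learned filters
x = rng.normal(size=8)              # stand-in for image pixels

before = np.linalg.norm(W @ x)
for _ in range(10):                 # repeated ascent, as in DeepDream
    x = dream_step(x, W)
after = np.linalg.norm(W @ x)
```

In the real technique the gradient is obtained by backpropagation through a trained CNN, and the image is additionally processed at multiple scales; the principle of iteratively strengthening the layer's response is the same.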

4. Performance Comparisons: Models and Art Style Evolution

To evaluate the impact and quality of different machine learning approaches on art creation, we compare models including NST, GANs, and DeepDream. The performance is assessed based on three key parameters: quality of output, computation time, and creative adaptability.

4.1. Quality of Output

NST tends to produce coherent images that effectively blend content and style, making it appropriate for generating a specific artistic interpretation of a given image. GANs, in contrast, have the advantage of creating entirely new compositions, often yielding more imaginative and visually appealing outputs. Figure 4 shows the quality score comparison of different machine learning models in art generation. This graph compares the quality scores of NST, GANs, and DeepDream on a scale of 1 to 10, highlighting their relative strengths in generating artistic outputs.

4.2. Computation Time

NST is faster than GANs, particularly when applied to images with a pre-trained model. GANs, while more computationally intensive, have shown better quality in generating diverse and unique outputs.

4.3. Creative Adaptability

GANs exhibit high adaptability, capable of learning from diverse datasets and producing novel styles that do not directly replicate the training examples. DeepDream provides limited adaptability but is highly effective at generating enhanced, surreal visuals from existing content.

4.4. Algorithmic Creativity: Impact on Artistic Practices

4.4.1. Expanding Definition of Art

The use of machine learning has sparked discussions about what qualifies as art and who (or what) should be considered the “creator”. Algorithmic art generated by models such as GANs challenges traditional notions of authorship, as the creative process is shared between the human and the machine, as shown in Figure 5.

4.4.2. Collaboration Between Artists and Algorithms

Artists have begun collaborating with algorithms to produce artworks that neither humans nor machines can create independently. Machine learning models provide a new palette for artists, enabling them to experiment with styles that are beyond their usual capabilities [4].

4.4.3. Democratization of Art Creation

Machine learning models have made art creation accessible to a wider audience, allowing individuals without formal training in painting or drawing to create complex artworks as shown in Figure 6. Tools including DeepArt and Runway ML offer interfaces that allow users to generate artworks simply by uploading images and choosing desired styles.
Figure 7 represents the transformation from an original image through blending with a style image to the final output, showcasing the different stages of applying an art style using AI. Figure 8 depicts the iterative and collaborative process between an AI model and a human artist, emphasizing their partnership in refining and producing creative artwork [5].

5. Challenges and Future Directions

Training GANs requires significant computational power and resources, which can limit accessibility for independent artists. Models trained on biased datasets may produce biased or unoriginal outputs, limiting the diversity of the generated artworks [6]. The use of machine learning for art creation also raises ethical concerns about authorship and ownership [7]. If an algorithm produces a unique work of art, questions arise regarding who owns the rights to that artwork: the programmer, the user, or the model [8]. As AI models evolve, human-machine collaboration is expected to deepen, with artists using models not just as tools but as creative partners [9]. Future models are needed to explore evolving styles in real time, adapting to audience feedback and generating art that is continually changing [10].
Machine learning (ML) models rely heavily on training data. In the context of evolving art styles, datasets are often biased toward popular or historically well-documented styles (e.g., Renaissance, Impressionism), while niche, indigenous, or contemporary movements are underrepresented. Models may reinforce dominant aesthetics while marginalizing less mainstream styles, limiting true exploration of stylistic diversity. For example, a StyleGAN model trained on Western classical art often fails to adequately reproduce indigenous Aboriginal or contemporary urban art without extensive retraining [11].
Creativity and artistic value are inherently subjective, varying greatly across cultures, contexts, and individual perceptions. There is no universally accepted metric for evaluating “creative success” in machine-generated art. Most current evaluation methods (e.g., the Fréchet Inception Distance (FID) or human surveys) fail to capture the nuanced emotional and cultural impacts of art. An ML model might produce visually complex outputs but fail to evoke the intended emotional resonance in different audience groups [12].
Most machine learning models, particularly those based on supervised or unsupervised learning (e.g., CNNs, GANs), excel at imitation rather than true innovation. ML-generated art often recombines existing elements rather than producing genuinely novel styles or paradigms. Deep creativity, such as the founding of Cubism or Surrealism, is still beyond current algorithmic capabilities. Despite variations, outputs of neural style transfer techniques often appear as interpolations between known styles, lacking disruptive originality [13].
Models trained extensively on specific art datasets risk overfitting, where the generative outputs converge narrowly around particular style parameters. Overfitting reduces model generalization, leading to repetitive and predictable artworks that fail to explore new stylistic territories [14]. A model trained mainly on Van Gogh paintings might consistently output swirling brushstroke textures even when tasked to generate a "new style."
Training datasets often include copyrighted artworks without explicit permission from the artists. This raises serious legal and ethical concerns about ownership, credit attribution, and the right to remix or monetize derivative works. Several lawsuits have been filed against AI companies for using copyrighted art without licenses in training generative models.

6. Conclusions

Machine learning has reshaped the landscape of art creation, enabling new forms of creativity that blend algorithmic capabilities with human intent. NST, GANs, and DeepDream are the most notable techniques that have influenced modern artistic practices, offering a fresh perspective on style, form, and the role of technology in art. While these technologies present numerous opportunities, they also challenge the traditional understanding of authorship, creativity, and the nature of art. As machine learning continues to evolve, the boundary between artist and algorithm becomes increasingly blurred, opening up new possibilities for what art can be and who or what can create it.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144.
  2. Gatys, L.A.; Ecker, A.S.; Bethge, M. A Neural Algorithm of Artistic Style. arXiv 2015, arXiv:1508.06576.
  3. Al-Khazraji, L.R.; Abbas, A.R.; Jamil, A.S.; Hussain, A.J. A Hybrid Artistic Model Using Deepy-Dream Model and Multiple Convolutional Neural Networks Architectures. IEEE Access 2023, 11, 101443–101459.
  4. Elgammal, A.; Liu, B.; Elhoseiny, M.; Mazzone, M. CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. arXiv 2017, arXiv:1706.07068.
  5. McCormack, J.; Gifford, T.; Hutchings, P. Autonomy, Authenticity, Authorship and Intention in Computer Generated Art. In Proceedings of the EvoMUSART: International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar), Cham, Switzerland, 24–26 April 2019; pp. 35–50.
  6. Leong, W.Y. AI-Generated Artwork as a Modern Interpretation of Historical Paintings. Int. J. Soc. Sci. Artist. Innov. 2025, 5, 15–19.
  7. Leong, W.Y. AI-Powered Color Restoration of Faded Historical Painting. In Proceedings of the 10th International Conference on Digital Arts, Media and Technology (DAMT) and The 8th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (NCON), Nan, Thailand, 29 January–1 February 2025; pp. 623–627.
  8. Shao, L.J.; Chen, B.S.; Zhang, Z.Q.; Zhang, Z.; Chen, X.R. Artificial intelligence generated content (AIGC) in medicine: A narrative review. Math. Biosci. Eng. 2024, 2, 1672–1711.
  9. Leong, W.Y.; Zhang, J.B. AI on Academic Integrity and Plagiarism Detection. ASM Sci. J. 2025, 20, 75.
  10. Leong, W.Y.; Zhang, J.B. Ethical Design of AI for Education and Learning Systems. ASM Sci. J. 2025, 20, 1–9.
  11. Lou, Y.Q. Human Creativity in the AIGC Era. J. Des. Econ. Innov. 2023, 9, 541–552.
  12. Guo, D.H.; Chen, H.X.; Wu, R.L.; Wang, Y.G. AIGC challenges and opportunities related to public safety: A case study of ChatGPT. J. Saf. Sci. Resil. 2023, 4, 329–339.
  13. Cheng, M. The Creativity of Artificial Intelligence in Art. Proceedings 2022, 81, 110.
  14. Oksanen, A.; Cvetkovic, A.; Akin, N.; Latikka, R.; Bergdahl, J.; Chen, Y.; Savela, N. Artificial intelligence in fine arts: A systematic review of empirical research. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100004.
Figure 1. Algorithmically generated artwork combining elements of classical and modern art.
Figure 2. Illustration of NST process.
Figure 3. Workflow diagram of GANs in art creation.
Figure 4. Quality score comparison of different machine learning models in art generation.
Figure 5. Comparison between human-generated and machine-generated abstract art.
Figure 6. Adoption of machine learning in art by professional vs. non-professional artists.
Figure 7. AI art style transformation steps.
Figure 8. Future workflow for AI-augmented collaborative art creation.

Share and Cite

MDPI and ACS Style

Leong, W.Y. Machine Learning in Evolving Art Styles: A Study of Algorithmic Creativity. Eng. Proc. 2025, 92, 45. https://doi.org/10.3390/engproc2025092045

