Search Results (60)

Search Parameters:
Keywords = artistic transfer

23 pages, 15241 KiB  
Article
Diffusion Model-Based Cartoon Style Transfer for Real-World 3D Scenes
by Yuhang Chen, Haoran Zhou, Jing Chen, Nai Yang, Jing Zhao and Yi Chao
ISPRS Int. J. Geo-Inf. 2025, 14(8), 303; https://doi.org/10.3390/ijgi14080303 - 4 Aug 2025
Abstract
Traditional map style transfer methods are mostly based on GANs and tend to be either overly artistic at the expense of conveying information or insufficiently aesthetic, merely changing the color scheme of the map image. These methods often struggle to balance style transfer with semantic preservation and lack consistency in their transfer effects. In recent years, diffusion models have made significant progress in image processing and have shown great potential in image style transfer tasks. Inspired by these advances, this paper presents a method for transferring real-world 3D scenes to a cartoon style without the need for additional input conditions as guidance. The method combines a pre-trained latent diffusion model (LDM) with LoRA models to achieve stable, high-quality style infusion. By integrating DDIM Inversion, ControlNet, and MultiDiffusion strategies, it achieves cartoon style transfer of real-world 3D scenes through initial noise control, detail redrawing, and global coordination. Qualitative and quantitative analyses, as well as user studies, indicate that our method effectively injects a cartoon style while preserving the semantic content of the real-world 3D scene and maintaining a high degree of consistency in style transfer. This paper offers a new perspective on map style transfer.
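As a rough illustration of the kind of pipeline this abstract describes, the structure-preserving stylization step can be sketched with off-the-shelf Hugging Face diffusers components. This is a minimal sketch under stated assumptions, not the authors' code: the model IDs, the LoRA path, and the input image are placeholders, and the paper's DDIM Inversion and MultiDiffusion stages are omitted.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# Edge-conditioned img2img: the init image carries scene content, the
# ControlNet edge map anchors geometry, and the LoRA injects the style.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("path/to/cartoon_style_lora")  # hypothetical LoRA

scene = Image.open("scene_render.png").convert("RGB").resize((768, 512))
gray = cv2.cvtColor(np.array(scene), cv2.COLOR_RGB2GRAY)
edges = Image.fromarray(np.stack([cv2.Canny(gray, 100, 200)] * 3, axis=-1))

stylized = pipe(prompt="cartoon style, flat shading, clean outlines",
                image=scene,          # content to preserve
                control_image=edges,  # structural guidance
                strength=0.6, guidance_scale=7.5).images[0]
stylized.save("scene_cartoon.png")
```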

13 pages, 7106 KiB  
Article
Multi-Scale Universal Style-Transfer Network Based on Diffusion Model
by Na Su, Jingtao Wang and Yun Pan
Algorithms 2025, 18(8), 481; https://doi.org/10.3390/a18080481 - 4 Aug 2025
Abstract
Artistic style transfer aims to transfer the style of an artwork to a photograph while maintaining the photograph's overall content. Although current style-transfer methods achieve promising results on photorealistic images, they often struggle with brushstroke preservation in artworks, especially in styles such as oil painting and pointillism. In such cases, the extracted style and content features tend to include redundant information, leading to issues such as blurred edges and a loss of fine details in the transferred images. To address this problem, this paper proposes a multi-scale universal style-transfer network based on diffusion models. The proposed network consists of a coarse style-transfer module and a refined style-transfer module. First, the coarse style-transfer module performs mainstream style-transfer tasks efficiently by operating on downsampled images, enabling faster processing with satisfactory results. Next, to further enhance edge fidelity, a refined style-transfer module is introduced. This module uses a segmentation component to generate a mask of the main subject in the image and performs edge-aware refinement, enhancing the fusion between the subject's edges and the target style while preserving more detailed features. To improve overall image quality and better integrate the style along content boundaries, the output of the coarse module is upsampled by a factor of two and combined with the subject mask. With the assistance of ControlNet and Stable Diffusion, the model performs content-aware edge redrawing to enhance the overall visual quality of the stylized image. Compared with state-of-the-art style-transfer methods, the proposed model preserves more edge details and achieves a more natural fusion between style and content.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
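A minimal PyTorch sketch of the coarse-to-refined flow the abstract outlines. The coarse stylizer and the subject mask are stand-ins (any fast stylizer and any segmentation model would serve), and the blending rule is an assumption rather than the paper's exact refinement step.

```python
import torch
import torch.nn.functional as F

def coarse_stylize(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for the coarse style-transfer module (runs at low resolution)."""
    return x  # identity placeholder; substitute a real fast stylizer

def refine(img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # 1) Cheap pass on a 2x-downsampled copy of the content image.
    low = F.interpolate(img, scale_factor=0.5, mode="bilinear",
                        align_corners=False)
    coarse = coarse_stylize(low)
    # 2) Upsample the coarse result by a factor of two, as in the paper.
    up = F.interpolate(coarse, scale_factor=2.0, mode="bilinear",
                       align_corners=False)
    # 3) Mask-guided fusion: keep the subject sharp, take style elsewhere.
    #    A real system would redraw the masked edges with
    #    ControlNet + Stable Diffusion at this point.
    return mask * img + (1.0 - mask) * up

img = torch.rand(1, 3, 512, 512)    # content photo (illustrative tensor)
mask = torch.zeros(1, 1, 512, 512)  # subject mask from a segmenter (stand-in)
out = refine(img, mask)
```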

21 pages, 7084 KiB  
Article
Chinese Paper-Cutting Style Transfer via Vision Transformer
by Chao Wu, Yao Ren, Yuying Zhou, Ming Lou and Qing Zhang
Entropy 2025, 27(7), 754; https://doi.org/10.3390/e27070754 - 15 Jul 2025
Viewed by 336
Abstract
Style transfer technology has seen substantial attention in image synthesis, notably in applications such as oil painting, digital printing, and Chinese landscape painting. However, when applying the unique style of Chinese paper-cutting art to style transfer, it is often difficult to generate transferred images that retain the essence of paper-cutting while remaining visually appealing. This paper therefore proposes a new Transformer-based method for Chinese paper-cutting style transfer, aiming at efficient transformation into paper-cutting art styles. Specifically, the network consists of a frequency-domain mixture block and a multi-level feature contrastive learning module. The frequency-domain mixture block explores spatial and frequency-domain interaction information, integrates multiple attention windows along with frequency-domain features, preserves critical details, and enhances the effectiveness of style conversion. To further embody the symmetrical structures and hollowed hierarchical patterns intrinsic to Chinese paper-cutting, the multi-level feature contrastive learning module is designed around a contrastive learning strategy. This module maximizes mutual information between multi-level transferred features and content features and improves the consistency of representations across layers, accentuating the unique symmetrical aesthetics and artistic expression of paper-cutting. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art approaches in both qualitative and quantitative evaluations. Additionally, we created a Chinese paper-cutting dataset that, although modest in size, represents an important first step towards enriching existing resources; it provides valuable training data and a reference benchmark for future research in this field.
(This article belongs to the Section Multidisciplinary Applications)
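The multi-level contrastive objective can be illustrated with a generic InfoNCE formulation that treats each layer's transferred feature and the same image's content feature as a positive pair, with other batch items as negatives. This is a common way to maximize a lower bound on mutual information, not the authors' exact loss; shapes are assumed for illustration.

```python
import torch
import torch.nn.functional as F

def multi_level_nce(transferred, content, tau=0.07):
    """InfoNCE over lists of per-layer feature maps [(B,C,H,W), ...]."""
    loss = 0.0
    for t, c in zip(transferred, content):
        t = F.normalize(t.flatten(1), dim=1)  # (B, C*H*W) unit vectors
        c = F.normalize(c.flatten(1), dim=1)
        logits = t @ c.T / tau                # (B, B) similarity matrix
        labels = torch.arange(t.size(0), device=t.device)
        loss = loss + F.cross_entropy(logits, labels)  # positives on diagonal
    return loss / len(transferred)

feats_t = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16)]
feats_c = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16)]
print(multi_level_nce(feats_t, feats_c))
```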

20 pages, 24813 KiB  
Article
BrushGaussian: Brushstroke-Based Stylization for 3D Gaussian Splatting
by Zhi-Zheng Xiang, Chun Xie and Itaru Kitahara
Appl. Sci. 2025, 15(12), 6881; https://doi.org/10.3390/app15126881 - 18 Jun 2025
Viewed by 524
Abstract
We present a method for enhancing 3D Gaussian Splatting primitives with brushstroke-aware stylization. Previous approaches to 3D style transfer are typically limited to color or texture modifications, lacking an understanding of artistic shape deformation. In contrast, we focus on individual 3D Gaussian primitives, exploring their potential to enable style transfer that incorporates both color- and brushstroke-inspired local geometric stylization. Specifically, we introduce additional texture features for each Gaussian primitive and apply a texture mapping technique to achieve brushstroke-like geometric effects in a rendered scene. Furthermore, we propose an unsupervised clustering algorithm to efficiently prune redundant Gaussians, ensuring that our method seamlessly integrates with existing 3D Gaussian Splatting pipelines. Extensive evaluations demonstrate that our approach outperforms existing baselines by producing brushstroke-aware artistic renderings with richer geometric expressiveness and enhanced visual appeal.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
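The pruning step might look like the following sketch: cluster Gaussians on joint position/color features and keep the one nearest each centroid. The abstract does not name the clustering algorithm, so k-means (an unsupervised choice) is an assumption here, as are the feature design and counts.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_gaussians(xyz: np.ndarray, rgb: np.ndarray, keep: int) -> np.ndarray:
    """Cluster (N,3) centers + (N,3) colors; keep one Gaussian per cluster."""
    feats = np.hstack([xyz, rgb])                  # simple joint feature
    km = KMeans(n_clusters=keep, n_init="auto").fit(feats)
    kept = []
    for k in range(keep):
        idx = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(feats[idx] - km.cluster_centers_[k], axis=1)
        kept.append(idx[d.argmin()])               # representative Gaussian
    return np.array(kept)

xyz, rgb = np.random.rand(5_000, 3), np.random.rand(5_000, 3)
survivors = prune_gaussians(xyz, rgb, keep=1_000)  # ~80% pruned
```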

16 pages, 3824 KiB  
Article
Style Transfer and Topological Feature Analysis of Text-Based CAPTCHA via Generative Adversarial Networks
by Tao Xue, Zixuan Guo, Zehang Yin and Yu Rong
Mathematics 2025, 13(11), 1861; https://doi.org/10.3390/math13111861 - 2 Jun 2025
Viewed by 445
Abstract
The design and cracking of text-based CAPTCHAs are important topics in computer security. This study proposes a method for the style transfer of text-based CAPTCHAs using Generative Adversarial Networks (GANs). First, a curated dataset was assembled, combining a text-based CAPTCHA library with image collections from four artistic styles (Van Gogh, Monet, Cézanne, and Ukiyo-e), and used to generate style-based text CAPTCHA samples. Subsequently, a universal style transfer model was employed, along with CycleGAN models trained for both single- and double-style transfers, to generate style-enhanced text-based CAPTCHAs. Traditional methods for evaluating the anti-recognition capability of text-based CAPTCHAs focus primarily on recognition success rates; this study introduces topological feature analysis as an additional evaluation method. First, the recognition success rates of the three methods across the four styles were evaluated using Muggle-OCR. Then, the graph diameter was employed to quantify the differences between text-based CAPTCHA images before and after style transfer. The experimental results demonstrate that the recognition rates of style-enhanced text-based CAPTCHAs are consistently lower than those of the original CAPTCHAs, suggesting that style transfer enhances anti-recognition capability. The topological feature analysis indicates that style transfer yields a more compact topological structure, further validating the effectiveness of the GAN-based double-style transfer method in enhancing CAPTCHA complexity and anti-recognition capability.
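The graph-diameter metric can be illustrated with networkx: binarize the CAPTCHA, build a pixel-adjacency graph over ink pixels, and take the diameter of the largest connected component. Downsampling first keeps the exact diameter computation tractable; the image size and threshold below are illustrative assumptions, not the paper's settings.

```python
import networkx as nx
import numpy as np
from PIL import Image

def captcha_diameter(path: str, size=(60, 20), thresh=128) -> int:
    # Dark pixels (< thresh) are treated as ink.
    ink = np.array(Image.open(path).convert("L").resize(size)) < thresh
    g = nx.Graph()
    h, w = ink.shape
    for y in range(h):
        for x in range(w):
            if not ink[y, x]:
                continue
            for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):  # 8-connectivity
                ny, nx_ = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx_ < w and ink[ny, nx_]:
                    g.add_edge((y, x), (ny, nx_))
    comp = max(nx.connected_components(g), key=len)  # largest ink component
    return nx.diameter(g.subgraph(comp))

# A lower diameter after style transfer indicates a more compact structure:
# print(captcha_diameter("captcha_before.png"), captcha_diameter("captcha_after.png"))
```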

17 pages, 6326 KiB  
Article
Meta Network for Flow-Based Image Style Transfer
by Yihjia Tsai, Hsiau-Wen Lin, Chii-Jen Chen, Hwei-Jen Lin and Chen-Hsiang Yu
Electronics 2025, 14(10), 2035; https://doi.org/10.3390/electronics14102035 - 16 May 2025
Viewed by 421
Abstract
Style transfer aims to produce synthesized images that retain the content of one image while adopting the artistic style of another. Traditional style transfer methods often require training a separate transformation network for each new style, limiting their adaptability and scalability. To address this challenge, we propose a flow-based image style transfer framework that integrates Randomized Hierarchy Flow (RH Flow) with a meta network for adaptive parameter generation. The meta network dynamically produces the RH Flow parameters conditioned on the style image, enabling efficient and flexible style adaptation without retraining for new styles. RH Flow enhances feature interaction by introducing a random permutation of feature sub-blocks before hierarchical coupling, promoting diverse and expressive stylization while preserving the content structure. Our experimental results demonstrate that the proposed framework, Meta FIST, achieves superior content retention, style fidelity, and adaptability compared to existing approaches.
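A generic sketch of the RH Flow idea: randomly permute channel sub-blocks, then apply an affine coupling whose scale and shift are generated by a style-conditioned meta network. This is a schematic coupling layer under assumed shapes, not the published architecture.

```python
import torch
import torch.nn as nn

class RandomizedCoupling(nn.Module):
    """Permute channel sub-blocks, then affine-couple half the channels
    using parameters produced by a style-conditioned meta network."""
    def __init__(self, channels: int, style_dim: int):
        super().__init__()
        self.register_buffer("perm", torch.randperm(channels))  # fixed shuffle
        self.half = channels // 2
        # Meta network: maps the style code to scale+shift for one half.
        self.meta = nn.Linear(style_dim, 2 * self.half)

    def forward(self, x, style):                 # x: (B,C,H,W), style: (B,D)
        x = x[:, self.perm]                      # randomized sub-block order
        xa, xb = x[:, :self.half], x[:, self.half:]
        scale, shift = self.meta(style).chunk(2, dim=1)
        scale = torch.tanh(scale)[..., None, None]   # bounded for stability
        shift = shift[..., None, None]
        yb = xb * torch.exp(scale) + shift       # invertible affine transform
        return torch.cat([xa, yb], dim=1)

layer = RandomizedCoupling(channels=64, style_dim=128)
out = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 128))
```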

63 pages, 22670 KiB  
Review
Style Transfer Review: Traditional Machine Learning to Deep Learning
by Yao Xu, Min Xia, Kai Hu, Siyi Zhou and Liguo Weng
Information 2025, 16(2), 157; https://doi.org/10.3390/info16020157 - 19 Feb 2025
Cited by 1 | Viewed by 12414
Abstract
Style transfer is a technique that learns style features from different domains and applies them to other images. It not only plays a role in artistic creation but also has important applications in image processing, video processing, and other fields. However, style transfer still faces challenges such as balancing style and content, model generalization, and diversity. This article first introduces the origin and development of style transfer and gives a brief overview of existing methods. Next, it surveys related research, introduces metrics used to evaluate the effect of style transfer, and summarizes datasets. It then focuses on the application of currently popular deep learning techniques to style transfer, including style transfer in video. Finally, it discusses possible future directions for the field.
(This article belongs to the Special Issue Surveys in Information Systems and Applications)

33 pages, 49068 KiB  
Article
The Interlaced Arches and the So-Called sebka Decoration: Origin and Materialisation in al-Andalus and Its Reinterpretation in Medieval Castile
by Ignacio González Cavero
Arts 2025, 14(1), 16; https://doi.org/10.3390/arts14010016 - 10 Feb 2025
Viewed by 2053
Abstract
In this article, I aim to address one of the most characteristic decorative elements of the Almohad period, the so-called sebka decoration. With this aim in mind, and drawing on the research carried out and the examples that have been preserved, I trace the origin of this ornamental motif, so recurrent in the architectural panorama of al-Andalus, and analyse not only its compositional scheme but also the different formal variants that arose around it. Furthermore, its use in other buildings in the Kingdom of Castile is a further indication that allows us to approach a scenario where cultural and artistic transfer between al-Andalus and the Christian territories was a reality.
(This article belongs to the Special Issue Islamic Art and Architecture in Europe)

19 pages, 2825 KiB  
Article
Style Transfer of Chinese Wuhu Iron Paintings Using Hierarchical Visual Transformer
by Yuying Zhou, Yao Ren, Chao Wu and Minglong Xue
Sensors 2024, 24(24), 8103; https://doi.org/10.3390/s24248103 - 19 Dec 2024
Viewed by 1030
Abstract
Within the domain of traditional art, Chinese Wuhu Iron Painting distinguishes itself through its distinctive craftsmanship, aesthetic expressiveness, and choice of materials, presenting a formidable challenge for stylistic transformation. This paper introduces a Hierarchical Visual Transformer (HVT) framework aimed at effective and precise style transfer of Wuhu Iron Paintings. The study begins with an in-depth analysis of the artistic style of Wuhu Iron Paintings, extracting key stylistic elements that meet the technical requirements for style conversion. In response to the unique artistic characteristics of Wuhu Iron Paintings, the research constructs a multi-layered network structure capable of effectively capturing and parsing style and content features. Building on this, we design an Efficient Local Attention Decoder (ELA-Decoder) that adaptively decodes style and content features through their correlations, significantly enhancing the modeling of dependencies between local and global information. Additionally, we propose a Content Correction Module (CCM) to eliminate redundant features generated during the style transfer process, further optimizing the transfer results. In light of the scarcity of existing datasets for Wuhu Iron Paintings, the study also collects and constructs a dedicated dataset for this task. Our method achieves the best performance on loss metrics, reducing style loss by at least 4% and content loss by at least 5% compared with other advanced methods. Expert evaluations were also conducted: our method received the highest number of votes, further demonstrating its superiority.

18 pages, 4262 KiB  
Article
Cyclic Consistent Image Style Transformation: From Model to System
by Jun Peng, Kaiyi Chen, Yuqing Gong, Tianxiang Zhang and Baohua Su
Appl. Sci. 2024, 14(17), 7637; https://doi.org/10.3390/app14177637 - 29 Aug 2024
Cited by 2 | Viewed by 1869
Abstract
Generative Adversarial Networks (GANs) have achieved remarkable success in various tasks, including image generation, editing, and reconstruction, as well as in unsupervised and representation learning. Despite these capabilities, GANs are often plagued by unstable training dynamics and limitations in generating complex patterns. To address these challenges, we propose a novel image style transfer method, named C3GAN, which leverages the CycleGAN architecture to achieve consistent and stable transformation of image style. Here, “image style” refers to the distinct visual characteristics or artistic elements, such as the color schemes, textures, and brushstrokes, that define the overall appearance of an image. Our method incorporates cyclic consistency, ensuring that the style transformation remains coherent and visually appealing, which enhances training stability and mitigates the generative limitations of traditional GAN models. We have also developed a robust and efficient image style transfer system, integrating Flask for web development and MySQL for database management, that demonstrates superior performance in transferring complex styles compared to existing model-based approaches. This paper presents the development of a comprehensive image style transfer system based on the C3GAN model, effectively addressing the challenges of GANs and expanding application potential in domains such as artistic creation and cinematic special effects.
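The cyclic-consistency constraint at the heart of C3GAN is the standard CycleGAN term: translating an image to the style domain and back should reproduce the input. A minimal sketch with stand-in generators:

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(G, F_, real_x, real_y, lam=10.0):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, weighted by lambda."""
    return lam * (l1(F_(G(real_x)), real_x) + l1(G(F_(real_y)), real_y))

# Identity stand-ins for the two generators; substitute real networks.
G = F_ = nn.Identity()
x = torch.rand(1, 3, 256, 256)   # content-domain image
y = torch.rand(1, 3, 256, 256)   # style-domain image
print(cycle_loss(G, F_, x, y))   # ~0 for identity generators
```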

19 pages, 2960 KiB  
Article
An Improved Detail-Enhancement CycleGAN Using AdaLIN for Facial Style Transfer
by Jingyun Liu, Han Liu, Yuxin He and Shuo Tong
Appl. Sci. 2024, 14(14), 6311; https://doi.org/10.3390/app14146311 - 19 Jul 2024
Cited by 4 | Viewed by 3431
Abstract
The rise of comics and games has led to increased artistic processing of portrait photos. With growing commercial demand and advances in deep learning, neural networks for rapid facial style transfer have become a key research area in computer vision. The task is to convert face photos into different styles while preserving content. Face images are more complex than regular images and require extensive modification, yet current methods often suffer from unnatural color transitions, loss of detail in highlighted areas, and noticeable artifacts along edges, resulting in low-quality stylized images. In this study, an enhanced generative adversarial network (GAN) based on Adaptive Layer-Instance Normalization (AdaLIN) plus a Laplacian regularization term is proposed. The AdaLIN normalization method dynamically adjusts the weights of the Instance Normalization (IN) and Layer Normalization (LN) parameters during training. By combining the strengths of both normalization techniques, the model selectively preserves and alters content information, striking a balance between style and content and addressing problems such as unnatural color transitions and loss of detail in highlights. Furthermore, the Laplacian regularization term denoises the image, preventing noise features from interfering with the color transfer process; it also reduces noise-induced color artifacts along the face's edges while maintaining the image's contour information. These enhancements significantly improve the quality of the generated face images. Our method was compared with traditional CycleGAN and recent algorithms such as XGAN and CariGAN through both subjective and objective evaluations. Subjectively, it produces more natural color transitions and superior artifact elimination, achieving higher Mean Opinion Score (MOS) ratings; objectively, it yields better scores across three metrics: FID, SSIM, and MS-SSIM.
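AdaLIN itself is a published normalization (introduced in U-GAT-IT): a learned blend weight rho mixes Instance Norm and Layer Norm statistics, with style-derived gamma and beta applied afterwards. A compact rendition, with shapes assumed for illustration:

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Adaptive Layer-Instance Normalization: a learned rho blends
    Instance Norm and Layer Norm statistics per channel."""
    def __init__(self, channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, channels, 1, 1), 0.9))

    def forward(self, x, gamma, beta):          # gamma, beta: (B, C) from style
        in_mean = x.mean(dim=(2, 3), keepdim=True)     # per-instance stats
        in_var = x.var(dim=(2, 3), keepdim=True)
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)  # per-layer stats
        ln_var = x.var(dim=(1, 2, 3), keepdim=True)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0.0, 1.0)          # keep the blend weight in [0,1]
        out = rho * x_in + (1.0 - rho) * x_ln
        return out * gamma[..., None, None] + beta[..., None, None]

norm = AdaLIN(channels=64)
y = norm(torch.randn(2, 64, 32, 32), torch.ones(2, 64), torch.zeros(2, 64))
```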

21 pages, 11701 KiB  
Article
GOYA: Leveraging Generative Art for Content-Style Disentanglement
by Yankun Wu, Yuta Nakashima and Noa Garcia
J. Imaging 2024, 10(7), 156; https://doi.org/10.3390/jimaging10070156 - 26 Jun 2024
Cited by 3 | Viewed by 2560
Abstract
The content-style duality is a fundamental element of art. Humans can easily differentiate these two dimensions: content refers to the objects and concepts in an artwork, and style to the way it looks. Yet we have not found a way to fully capture this duality with visual representations. While style transfer captures the visual appearance of a single artwork, it fails to generalize to larger sets; supervised classification-based methods are likewise impractical, since the perception of style lies on a spectrum rather than in categorical labels. We therefore present GOYA, which captures the artistic knowledge of a cutting-edge generative model to disentangle content and style in art. Experiments show that GOYA explicitly learns to represent the two artistic dimensions (content and style) of the original artistic image, paving the way for leveraging generative models in art analysis.

22 pages, 11519 KiB  
Article
Modern Muralists in the Spotlight: Technical and Material Characteristics of the 1946–1949 Mural Paintings by Almada Negreiros in Lisbon (Part 1)
by Milene Gil, Inês Cardoso, Mafalda Costa and José C. Frade
Heritage 2024, 7(6), 3310-3331; https://doi.org/10.3390/heritage7060156 - 14 Jun 2024
Cited by 3 | Viewed by 4212
Abstract
This paper presents the first insight into how Almada Negreiros, a key artist of the first generation of modernism in Portugal, created his mural painting masterpiece in the maritime station of Rocha do Conde de Óbidos in Lisbon. This set of six monumental mural paintings dates from 1946 to 1949 and is considered Almada's artistic epitome. As part of the ALMADA project (Unveiling the mural painting art of Almada Negreiros), the murals are being analyzed from a technical and material perspective to understand his modus operandi and the materials used. This is the first study of its nature carried out on site and in the laboratory using standard and more advanced imaging, non-invasive analysis, and microanalysis techniques. This article reports the results obtained by visual examination and technical photography in visible (Vis) and visible raking (Vis-Rak) light, complemented by 2D and 3D optical microscopy (OM), scanning electron microscopy with energy-dispersive spectrometry (SEM-EDS), and Fourier transform infrared micro-spectroscopy (µ-FTIR) of the paint layers. The results show the similarities, differences, and technical difficulties that the painter may have encountered when working on the first, third, and presumably last mural to be painted. Vis-Rak images were particularly useful in providing a clear idea of how the work progressed from top to bottom through large sections of plaster made with lime mortars. They also revealed an innovative pouncing technique used by Almada Negreiros to transfer the drawings at full scale to the walls. Other technical characteristics highlighted by the analytical setup are the use of textured, opaque, and transparent paint layers. The structure of the paintings does not follow a rigid build-up from light to dark, showing that the artist adapted freely to the motif represented. As for the colour palette, Almada masterfully used primary and complementary colours made with Fe-based pigments, synthetic ultramarine blue, cadmium pigments, and emerald green.
(This article belongs to the Section Cultural Heritage)

14 pages, 11759 KiB  
Article
Design and Implementation of Dongba Character Font Style Transfer Model Based on AFGAN
by Congwang Bao, Yuan Li and En Lu
Sensors 2024, 24(11), 3424; https://doi.org/10.3390/s24113424 - 26 May 2024
Cited by 2 | Viewed by 1635
Abstract
Dongba characters are an ancient ideographic script whose abstract forms differ greatly from modern Chinese characters, so directly applying existing methods cannot achieve font style transfer for Dongba characters. This paper proposes an Attention-based Font style transfer Generative Adversarial Network (AFGAN) method. Based on the characteristics of Dongba character images, two core modules are built into the proposed AFGAN: a void constraint and a font stroke constraint. In addition, to enhance the feature learning ability of the network and improve the style transfer effect, a Convolutional Block Attention Module (CBAM) is added in the down-sampling stage to help the network better adapt to input font images in different styles. Using a newly built dataset of small seal script, slender gold script, and Dongba characters, the styles of small seal script and slender gold script were transferred to Dongba characters, and quantitative and qualitative analyses of the generated and real fonts were conducted in consultation with professional artists. The results indicate that the proposed AFGAN method outperforms existing networks in both evaluation metrics and visual quality, and that it effectively learns the style features of small seal script and slender gold script and transfers them to Dongba characters, demonstrating the effectiveness of the method.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
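CBAM, the attention block the abstract adds in the down-sampling stage, is a well-known published module that applies channel attention followed by spatial attention. A compact PyTorch rendition for reference:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP for channel attn
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from per-pixel channel statistics.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

block = CBAM(channels=64)
y = block(torch.randn(2, 64, 32, 32))  # attention-refined features
```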

13 pages, 5796 KiB  
Article
Contribution of Plant Transfer Printing to Sustainable Fashion
by Irena Šabarić, Ana Sutlović, Jana Filipčić and Franka Karin
Sustainability 2024, 16(11), 4361; https://doi.org/10.3390/su16114361 - 22 May 2024
Cited by 3 | Viewed by 2871
Abstract
Nowadays, there is growing awareness of environmental protection, new findings in the field of sustainable chemistry, the use of biodegradable materials, and increased use of eco-friendly textile products. For this reason, natural dyes are being used more and more frequently, giving rise to a new way of decorating textiles: plant transfer printing, popularly known as “eco-printing”, in which the shape and/or pigment of a plant is transferred to the textile. In addition, the strong interest of the younger generation in applying and researching natural dyes can create incentives for cultural and social sustainability through the preservation of national heritage. Plant transfer printing is a method that combines scientific technology and artistic design, with corresponding benefits for the ecosystem; the very fact that the patterns are unique and unpredictable brings out the notion of artistic freedom. In this work, plant transfer printing was carried out on undyed cotton material and on material dyed with pomegranate peels, walnut leaves, coffee, and Aleppo pine bark. The influence of pH value and fabric capillarity, as well as treatment of the leaves with iron(II) sulphate heptahydrate solution, on the aesthetics of the print and the colour fastness during washing was investigated. Based on the optimised parameters and a sustainable fabric design, the clothing collection “Hamadryad”, inspired by Greek mythology, was realised.