Search Results (31)

Search Parameters:
Keywords = virtual try-on

23 pages, 10939 KB  
Article
Virtual Try-on-Based Data Augmentation for Robust Person Re-Identification in Emergency Surveillance Scenarios
by Pei Wang, Jiaming Liu, Yuyao Cao and Hui Zhang
Fire 2026, 9(3), 116; https://doi.org/10.3390/fire9030116 - 5 Mar 2026
Abstract
Person Re-identification (Re-ID) plays an important role in dynamic evacuation path planning and safety monitoring. However, rapid appearance changes and limited long-term surveillance data significantly degrade model robustness in emergency scenarios. To address this issue, a virtual try-on-based data augmentation framework is proposed for person Re-ID. A prompt-based automatic clothing mask generation (PACMG) module integrating Grounding DINO and the Segment Anything Model (SAM) is developed to improve clothing mask accuracy under low-resolution, occlusion, and complex background conditions. A tiered augmentation strategy is further designed to alleviate identity-level imbalance. Experimental results demonstrate that the proposed method increases the clothing replacement validity rate from 52% to 73.61% while preserving identity consistency and distribution stability, as verified through multi-level analyses. When the augmented data are incorporated into the training set, consistent improvements in Rank-1 accuracy and mAP are observed on a ResNet-50-based person Re-ID benchmark. These results indicate that the augmented data enhance robustness to appearance variation, providing practical support for robust person tracking in evacuation scenarios. Full article
(This article belongs to the Special Issue Fire Safety Technology and Intelligent Evacuation)
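The tiered augmentation strategy mentioned above (generating more virtual try-on variants for under-represented identities) can be sketched as a simple count-based plan. The thresholds, multipliers, and function names below are illustrative assumptions, not the paper's actual settings:

```python
# Hypothetical sketch of a tiered augmentation plan: identities with fewer
# images receive more synthetic (virtual try-on) variants. Thresholds and
# multipliers are illustrative, not taken from the paper.

def augmentation_multiplier(n_images, thresholds=(5, 15), multipliers=(4, 2, 0)):
    """Return how many augmented images to generate per real image."""
    low, high = thresholds
    if n_images < low:          # severely under-represented identity
        return multipliers[0]
    if n_images < high:         # moderately under-represented identity
        return multipliers[1]
    return multipliers[2]       # well-represented: no augmentation needed

def plan_augmentation(identity_counts):
    """Map identity id -> number of augmented images to synthesize."""
    return {pid: n * augmentation_multiplier(n) for pid, n in identity_counts.items()}

plan = plan_augmentation({"id_01": 3, "id_02": 10, "id_03": 40})
print(plan)  # {'id_01': 12, 'id_02': 20, 'id_03': 0}
```

The point of the tiering is that augmentation effort concentrates where identity-level imbalance is worst, rather than inflating the whole training set uniformly.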

18 pages, 6996 KB  
Article
AI-Driven Style Transfer Framework Based on 3D Gaussian Splatting for Immersive Experiences
by Kyounghun Kim, Byungsun Hwang, Mingyu Lee, Jinwook Kim, Joonho Seon, Soohyun Kim, Youngghyu Sun, Suhyung Cho and Jinyoung Kim
Appl. Sci. 2026, 16(4), 1889; https://doi.org/10.3390/app16041889 - 13 Feb 2026
Abstract
Virtual try-on (VTO) technology has advanced rapidly and is now being applied in various fields. Although advancements in VTO technology have enabled not only 2D but also 3D visualization, applying style transfer to hyper-local regions remains challenging due to the complex surface curvature and ambiguous boundaries of 3D objects. To address these challenges, we propose an immersive 3D style transfer framework based on 3D Gaussian splatting. A segmentation model is employed to accurately segment target regions, and a large-scale specialized dataset is constructed to capture the morphological diversity of human hands. Furthermore, neural style transfer is integrated with the 3D representation to enable precise style application to hyper-local regions. The proposed framework achieves a mean intersection over union (mIoU) of 0.806 in segmentation and high-fidelity stylization with learned perceptual image patch similarity (LPIPS) and reference-based LPIPS (Ref-LPIPS) scores of 0.1472 and 0.0196, respectively. These results indicate that the proposed framework can meet quality requirements while providing an immersive VTO experience. Full article
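The mIoU figure reported in this abstract is a standard segmentation score. A minimal, generic computation over integer label maps (not the authors' evaluation code) looks like this:

```python
import numpy as np

# Minimal mean-intersection-over-union (mIoU) computation, the segmentation
# metric reported in the abstract. Generic sketch, not the paper's code.

def mean_iou(pred, target, num_classes):
    """pred, target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent in both maps: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))  # 0.775
```

Per-class IoUs (0.75 for background, 0.8 for the target region here) are averaged, so rare classes weigh as much as common ones.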

17 pages, 112223 KB  
Article
A Style-Adapted Virtual Try-On Technique for Story Visualization
by Wooseok Choi, Heekyung Yang and Kyungha Min
Electronics 2026, 15(3), 514; https://doi.org/10.3390/electronics15030514 - 25 Jan 2026
Abstract
We propose a novel clothing application technique designed for story visualization frameworks in which various characters appear wearing a wide range of outfits. To achieve this goal, we extend a Virtual Try-On framework for synthetic garment fitting. Conventional Virtual Try-On methods are limited to generating images of a single person wearing a restricted set of clothes within a fixed style domain. To overcome these limitations, we apply an improved Virtual Try-On model trained with appropriately processed datasets, enabling the generation of upper and lower garments separately across diverse characters and producing images in four distinct styles: photorealistic, webtoon, animation, and watercolor. Our system collects character and clothing images, performs accurate masking of garment regions, and takes a style-specific text prompt as input. Based on these inputs, garment-specific conditioning is applied to synthesize the clothing, followed by a cross-style diffusion process that generates Virtual Try-On images reflecting multiple visual styles. Our approach significantly enhances the adaptability and stylistic diversity of Virtual Try-On technology for story visualization applications. Full article
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)

23 pages, 7327 KB  
Article
Knit-Pix2Pix: An Enhanced Pix2Pix Network for Weft-Knitted Fabric Texture Generation
by Xin Ru, Yingjie Huang, Laihu Peng and Yongchao Hou
Sensors 2026, 26(2), 682; https://doi.org/10.3390/s26020682 - 20 Jan 2026
Abstract
Texture mapping of weft-knitted fabrics plays a crucial role in virtual try-on and digital textile design due to its computational efficiency and real-time performance. However, traditional texture mapping techniques typically adapt pre-generated textures to deformed surfaces through geometric transformations. These methods overlook the complex variations in yarn length, thickness, and loop morphology during stretching, often resulting in visual distortions. To overcome these limitations, we propose Knit-Pix2Pix, a dedicated framework for generating realistic weft-knitted fabric textures directly from knitted unit mesh maps. These maps provide grid-based representations where each cell corresponds to a physical loop region, capturing its deformation state. Knit-Pix2Pix is an integrated architecture that combines a multi-scale feature extraction module, a grid-guided attention mechanism, and a multi-scale discriminator. Together, these components address the multi-scale and deformation-aware requirements of this task. To validate our approach, we constructed a dataset of over 2000 pairs of fabric stretching images and corresponding knitted unit mesh maps, with further testing using spring-mass fabric simulation. Experiments show that, compared with traditional texture mapping methods, SSIM increased by 21.8%, PSNR by 20.9%, and LPIPS decreased by 24.3%. This integrated approach provides a practical solution for meeting the requirements of digital textile design. Full article
(This article belongs to the Section Intelligent Sensors)
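PSNR, one of the three image-quality metrics reported in this abstract (alongside SSIM and LPIPS), has a short closed-form definition. A generic sketch, not the authors' evaluation code:

```python
import numpy as np

# Peak signal-to-noise ratio between a generated texture and a reference
# image, in dB. Higher is better; identical images give infinity.

def psnr(img_a, img_b, max_val=255.0):
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")     # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((8, 8), 100.0)
b = np.full((8, 8), 110.0)      # uniform error of 10 -> MSE = 100
print(round(psnr(a, b), 2))     # 28.13
```

Because PSNR is a pure pixel-wise measure, papers such as this one pair it with SSIM and LPIPS, which are more sensitive to structural and perceptual differences.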

49 pages, 3200 KB  
Systematic Review
Computer Vision for Fashion: A Systematic Review of Design Generation, Simulation, and Personalized Recommendations
by Ilham Kachbal and Said El Abdellaoui
Information 2026, 17(1), 11; https://doi.org/10.3390/info17010011 - 23 Dec 2025
Abstract
The convergence of fashion and technology has created new opportunities for creativity, convenience, and sustainability through the integration of computer vision and artificial intelligence. This systematic review, following PRISMA guidelines, examines 200 studies published between 2017 and 2025 to analyze computational techniques for garment design, accessories, cosmetics, and outfit coordination across three key areas: generative design approaches, virtual simulation methods, and personalized recommendation systems. We comprehensively evaluate deep learning architectures, datasets, and performance metrics employed for fashion item synthesis, virtual try-on, cloth simulation, and outfit recommendation. Key findings reveal significant advances in Generative adversarial network (GAN)-based and diffusion-based fashion generation, physics-based simulations achieving real-time performance on mobile and virtual reality (VR) devices, and context-aware recommendation systems integrating multimodal data sources. However, persistent challenges remain, including data scarcity, computational constraints, privacy concerns, and algorithmic bias. We propose actionable directions for responsible AI development in fashion and textile applications, emphasizing the need for inclusive datasets, transparent algorithms, and sustainable computational practices. This review provides researchers and industry practitioners with a comprehensive synthesis of current capabilities, limitations, and future opportunities at the intersection of computer vision and fashion design. Full article

21 pages, 3716 KB  
Article
Clothing-Agnostic Pre-Inpainting Virtual Try-On
by Sehyun Kim, Hye Jun Lee, Jiwoo Lee and Taemin Lee
Electronics 2025, 14(23), 4710; https://doi.org/10.3390/electronics14234710 - 29 Nov 2025
Abstract
With the development of deep learning, virtual try-on technology has gained important application value in e-commerce, fashion, and entertainment. The recently proposed Leffa technique addresses the texture distortion problem of diffusion-based models, but it remains limited by inaccurate bottom-garment detection and by the original clothing silhouette persisting in the synthesis results. To solve these problems, this study proposes CaP-VTON (Clothing-Agnostic Pre-Inpainting Virtual Try-On). CaP-VTON integrates DressCode-based multi-category masking with Stable Diffusion-based skin inpainting preprocessing; in particular, a generated-skin module is introduced to solve the skin restoration problems that occur when long-sleeved images are converted to short-sleeved or sleeveless ones, yielding a preprocessing structure that improves the naturalness and consistency of full-body clothing synthesis and enables high-quality restoration considering human posture and color. As a result, CaP-VTON achieved 92.5% short-sleeved synthesis accuracy, 15.4% higher than Leffa, and consistently reproduced the style and shape of the reference clothing in visual evaluations. These structures remain model-agnostic and are applicable to various diffusion-based virtual try-on systems; they can also contribute to applications that require high-precision virtual try-on, such as e-commerce, custom styling, and avatar creation. Full article

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the issue of blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirements, significantly lowering the adoption barrier for 3D VTON technology.
Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)
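Stage (3) of the cascade partitions the multi-view dataset into front, side, and back subsets before training one LoRA per subset. A simple way to make such a split is by camera azimuth; the angle ranges below are illustrative assumptions, not the paper's actual boundaries:

```python
# Hypothetical front/side/back partition of multi-view captures by camera
# azimuth, mirroring the three-subset LoRA training split described in the
# abstract. The cut-off angles are illustrative assumptions.

def view_subset(azimuth_deg):
    """azimuth 0 = camera facing the subject; returns 'front', 'side', or 'back'."""
    a = azimuth_deg % 360.0
    if a <= 60 or a >= 300:
        return "front"
    if 120 <= a <= 240:
        return "back"
    return "side"

views = {deg: view_subset(deg) for deg in (0, 45, 90, 180, 270, 315)}
print(views)  # {0: 'front', 45: 'front', 90: 'side', 180: 'back', 270: 'side', 315: 'front'}
```

Each subset then trains its own LoRA, so no single adapter has to reconcile front-garment and back-garment detail in one set of weights.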

18 pages, 8897 KB  
Article
Exploring User Engagement and Purchase Intentions in T-Shirt Retail Through Augmented Reality and Instagram Filters
by Christopher Girsang and Chin-Hung Teng
Appl. Sci. 2025, 15(18), 10161; https://doi.org/10.3390/app151810161 - 18 Sep 2025
Abstract
Augmented reality (AR) technologies—such as Instagram filters—bridge the digital and physical worlds by allowing users to virtually try on clothing, thereby reducing the risk of virus transmission. In the T-shirt retail industry, AR enables product personalization, decreases the need for physical production, minimizes textile waste, and lowers carbon emissions. It also benefits individuals with limited mobility or those who prefer shopping online. This study tested several hypotheses on 105 active Instagram filter users using filters from the ’Apprecio’ account on mobile devices. Data analyzed using the partial least squares method revealed that interactivity significantly influences both purchase intention and continued use of digital platforms. While hedonic and vivid features enhance the user experience, they have a limited impact on driving purchases or long-term engagement. Customers’ engagement and buying intent are more strongly shaped by practical and interactive elements. The study recommends that companies invest in developing interactive AR features to boost customer satisfaction and foster trust. Future research should involve larger participant samples and investigate specific interactive elements—such as virtual try-on tools—to better understand their impact on consumer behavior. This study highlights the critical role of interactivity in AR for delivering meaningful and engaging shopping experiences. Full article
(This article belongs to the Special Issue Advances in Human–Machine Interaction)

20 pages, 6720 KB  
Article
UBSP-Net: Underclothing Body Shape Perception Network for Parametric 3D Human Reconstruction
by Xihang Li, Xianguo Cheng, Fang Chen, Furui Shi and Ming Li
Electronics 2025, 14(17), 3522; https://doi.org/10.3390/electronics14173522 - 3 Sep 2025
Abstract
This paper introduces a novel Underclothing Body Shape Perception Network (UBSP-Net) for reconstructing parametric 3D human models from clothed full-body 3D scans, addressing the challenge of estimating body shape and pose beneath clothing. Our approach simultaneously predicts both the internal body point cloud and a reference point cloud for the SMPL model, with point-to-point correspondence, leveraging the external scan as an initial approximation to enhance the model’s stability and computational efficiency. By learning point offsets and incorporating body part label probabilities, the network achieves accurate internal body shape inference, enabling reliable Skinned Multi-Person Linear (SMPL) human body model registration. Furthermore, we optimize the SMPL+D human model parameters to reconstruct the clothed human model, accommodating common clothing types, such as T-shirts, shirts, and pants. Evaluated on the CAPE dataset, our method outperforms mainstream approaches, achieving significantly lower Chamfer distance errors and faster inference times. The proposed automated pipeline ensures accurate and efficient reconstruction, even with sparse or incomplete scans, and demonstrates robustness on real-world Thuman2.0 dataset scans. This work advances parametric human modeling by providing a scalable and privacy-preserving solution for applications to 3D shape analysis, virtual try-ons, and animation. Full article
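The Chamfer distance this abstract uses as its error metric on CAPE can be written in a few lines. A brute-force O(N·M) sketch for small clouds, not the paper's implementation:

```python
import numpy as np

# Symmetric Chamfer distance between two 3D point clouds: average nearest-
# neighbor distance from a to b plus the same from b to a.

def chamfer_distance(a, b):
    """a: (N, 3), b: (M, 3) arrays of 3D points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(chamfer_distance(a, b))  # 1.0
```

Production pipelines replace the dense pairwise matrix with a KD-tree nearest-neighbor query so the metric scales to full-resolution scans.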

10 pages, 217 KB  
Proceeding Paper
Identification of Factors Influencing Consumers’ Use of Virtual Try-On Technology Based on UTAUT2 Model
by Jen-Ying Shih and Chia-Chieh Yeh
Eng. Proc. 2025, 108(1), 8; https://doi.org/10.3390/engproc2025108008 - 29 Aug 2025
Abstract
We explored the adoption of virtual try-on (VTO) technology in Taiwan’s fashion retail sector, which has gained prominence as consumer behavior online has changed since the COVID-19 pandemic. Using an extended unified theory of acceptance and use of technology 2 (UTAUT2) model, we examined the behavioral intentions and actual usage of VTO. The original framework of the UTAUT2 model was modified by excluding experience and incorporating personality traits as moderating variables. Based on 257 valid survey responses analyzed using SmartPLS 4.1, influencing factors were identified, revealing that gender was a significant moderator in VTO adoption. Full article
15 pages, 1636 KB  
Article
Examination of Alginite Mineral Supplementation During Fermentation of Probiotics and Its Effect on Skincare Activity of Ferment Lysates
by Pál Tóth and Áron Németh
Appl. Sci. 2025, 15(17), 9350; https://doi.org/10.3390/app15179350 - 26 Aug 2025
Abstract
Technological advancements, shifting consumer preferences, and societal changes drive the cosmetics industry to evolve continuously. The cosmetics industry is experiencing a renaissance, with new ingredients that are more environmentally friendly, natural, and transparent in terms of sourcing and manufacturing and, last but not least, which are also multifunctional. The use of technology in cosmetics has been rising, including AI (artificial intelligence) and AR (augmented reality) for virtual try-ons, skin analysis tools, and smart beauty devices that provide at-home skincare treatments. Meanwhile, fermented cosmetic ingredients are becoming increasingly popular in the beauty industry due to their improved efficacy and skin benefits. The benefits include enhanced absorption, improved stability (due to the self-produced preservatives), microbiome-friendliness (supporting the skin’s microbiome), and anti-inflammatory and soothing effects. The most common cosmetic ingredients produced by microorganisms are fermented rice, soy, green tea, fruits, and vegetables. Our laboratory investigates a mineral rock called alginite, which has shown many benefits in other fields, such as agriculture and cosmetics (e.g., as a facemask). It has been proven that alginite combined with LAB (lactic acid-producing bacteria) probiotics is beneficial for health and can increase biomass production. However, cell lysates with alginite have never been investigated for cosmetic purposes. This study aimed to investigate the potential of alginite, a mineral rock, in enhancing the cosmetic properties of LAB lysates, specifically focusing on antioxidant effects, skin-whitening properties, and, in preliminary tests, skin-moisturising effects. LAB strains were cultured with and without alginite, and the resulting cell lysates were analysed for these cosmetic applications. 
The preliminary results suggested that alginite may boost the hydrating effect of LAB lysate, increasing it tenfold compared to LAB lysate alone. The antioxidant effect was enhanced fivefold in the case of Lactobacillus acidophilus when cultured with alginite. However, no significant effect was observed on mushroom tyrosinase inhibition, suggesting no impact on pigment formation. Further research is needed to fully understand the mechanisms underlying these effects and to explore potential applications in cosmetic formulations. Limitations of this study include the focus on specific LAB strains and the need for in vivo studies to confirm the observed effects on human skin. Full article

21 pages, 3250 KB  
Article
Deploying Optimized Deep Vision Models for Eyeglasses Detection on Low-Power Platforms
by Henrikas Giedra, Tomyslav Sledevič and Dalius Matuzevičius
Electronics 2025, 14(14), 2796; https://doi.org/10.3390/electronics14142796 - 11 Jul 2025
Abstract
This research addresses the optimization and deployment of convolutional neural networks for eyeglasses detection on low-power edge devices. Multiple convolutional neural network architectures were trained and evaluated using the FFHQ dataset, which contains annotated eyeglasses in the context of faces with diverse facial features and eyewear styles. Several post-training quantization techniques, including Float16, dynamic range, and full integer quantization, were applied to reduce model size and computational demand while preserving detection accuracy. The impact of model architecture and quantization methods on detection accuracy and inference latency was systematically evaluated. The optimized models were deployed and benchmarked on Raspberry Pi 5 and NVIDIA Jetson Orin Nano platforms. Experimental results show that full integer quantization reduces model size by up to 75% while maintaining competitive detection accuracy. Among the evaluated models, MobileNet architectures achieved the most favorable balance between inference speed and accuracy, demonstrating their suitability for real-time eyeglasses detection in resource-constrained environments. These findings enable efficient on-device eyeglasses detection, supporting applications such as virtual try-ons and IoT-based facial analysis systems. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
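The "up to 75%" size reduction from full integer quantization follows directly from mapping 4-byte float32 weights to 1-byte int8. A generic per-tensor dynamic-range sketch of the idea (not the deployed TFLite pipeline):

```python
import numpy as np

# Post-training quantization sketch: float32 weights mapped to int8 with a
# per-tensor scale. Storage drops from 4 bytes to 1 byte per weight (75%),
# consistent with the reduction reported in the abstract.

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
size_reduction = 1 - q.nbytes / w.nbytes
print(q, round(size_reduction, 2))
```

Inference then either computes in int8 directly (full integer quantization) or dequantizes on the fly (dynamic range); the accuracy cost depends on how well a single scale covers the weight distribution.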

19 pages, 17487 KB  
Article
LiteMP-VTON: A Knowledge-Distilled Diffusion Model for Realistic and Efficient Virtual Try-On
by Shufang Zhang, Lei Wang and Wenxin Ding
Information 2025, 16(5), 408; https://doi.org/10.3390/info16050408 - 15 May 2025
Abstract
Diffusion-based approaches have recently emerged as powerful alternatives to GAN-based virtual try-on methods, offering improved detail preservation and visual realism. Despite their advantages, the substantial number of parameters and intensive computational requirements pose significant barriers to deployment on low-resource platforms. To tackle these limitations, we propose a diffusion-based virtual try-on framework optimized through feature-level knowledge compression. Our method introduces MP-VTON, an enhanced inpainting pipeline based on Stable Diffusion, which incorporates improved Masking techniques and Pose-conditioned enhancement to alleviate garment boundary artifacts. To reduce model size while maintaining performance, we adopt an attention-guided distillation strategy that transfers semantic and structural knowledge from MP-VTON to a lightweight model, LiteMP-VTON. Experiments demonstrate that LiteMP-VTON achieves nearly a 3× reduction in parameter count and close to 2× speedup in inference, making it well suited for deployment in resource-limited environments without significantly compromising generation quality. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
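The attention-guided distillation strategy mentioned in this abstract typically penalizes the student when its attention maps diverge from the teacher's. A minimal sketch under that assumption; the normalization choice and names are ours, and the paper's exact loss may differ:

```python
import numpy as np

# Hypothetical attention-guided distillation loss: MSE between teacher and
# student attention maps after per-map unit-norm normalization, averaged
# over layers. An illustrative sketch, not the authors' loss.

def normalize_map(a):
    return a / (np.linalg.norm(a) + 1e-8)   # unit Frobenius norm per map

def attention_distill_loss(teacher_maps, student_maps):
    losses = [np.mean((normalize_map(t) - normalize_map(s)) ** 2)
              for t, s in zip(teacher_maps, student_maps)]
    return float(np.mean(losses))

t = [np.ones((4, 4)), np.eye(4)]
s = [np.ones((4, 4)), np.eye(4)]
print(attention_distill_loss(t, s))  # identical maps -> 0.0
```

Normalizing each map first makes the loss compare attention *patterns* rather than raw magnitudes, which matters when teacher and student feature scales differ.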

32 pages, 4258 KB  
Article
User Experience Design for Online Sports Shoe Retail Platforms: An Empirical Analysis Based on Consumer Needs
by Yixin Zou, Chao Zhao, Peter Childs, Dingbang Luh and Xiaoying Tang
Behav. Sci. 2025, 15(3), 311; https://doi.org/10.3390/bs15030311 - 5 Mar 2025
Abstract
Digital technologies represented by AR (Augmented Reality), VR (Virtual Reality), and digital twins, along with the expansion of metaverse platforms and digital marketing concepts, have attracted the attention of numerous sports fashion product consumers and brands, particularly in the category of sports shoes. Therefore, in the context of digital technologies, understanding the factors that affect consumer experience and preferences in the online purchasing process of sports shoes is very important. This study employs Latent Dirichlet Allocation topic analysis to analyze 44,110 online user posts and comments from social platforms, extracting thematic elements of consumer experience needs for purchasing sports shoes online. The information obtained is further encoded and designed into a questionnaire, which is then utilized alongside the Kano model to analyze the overall preferences of consumer experience needs. The results indicate that webpage design and basic product information are considered Must-be attributes for user experience needs; providing information on after-sales service policies and product comments, products’ special feature information, and online size testing are recognized as Performance attributes. Additionally, high-tech interaction methods, visual presentation, personalized customization, virtual try-on, apparel matching recommendations, and dressing scenario recommendations are identified as Attractive attributes. The study reveals that in the context of new digital technology development, the online shopping experience for sports shoes is enhanced across four dimensions: platform experience augmentation, product experience augmentation, user demand augmentation, and interactive experience augmentation. These four dimensions collectively constitute the holistic experience design for the online retail platform.
Therefore, this research provides case references and theoretical insights for researchers and developers in the fields of brand marketing, experience design, and product service innovation. Full article
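The Kano categories named in this abstract (Must-be, Performance, Attractive) come from the standard Kano evaluation table, which crosses a functional question ("how do you feel if the feature is present?") with a dysfunctional one ("...if absent?"). A minimal lookup using that standard table, not the authors' analysis code:

```python
# Standard Kano evaluation table: maps a (functional, dysfunctional) answer
# pair to Attractive (A), One-dimensional/Performance (O), Must-be (M),
# Indifferent (I), Reverse (R), or Questionable (Q).

ANSWERS = ["like", "expect", "neutral", "live_with", "dislike"]
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],   # functional answer: like
    ["R", "I", "I", "I", "M"],   # functional answer: expect
    ["R", "I", "I", "I", "M"],   # functional answer: neutral
    ["R", "I", "I", "I", "M"],   # functional answer: live_with
    ["R", "R", "R", "R", "Q"],   # functional answer: dislike
]

def kano_category(functional, dysfunctional):
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))     # O: performance attribute
print(kano_category("neutral", "dislike"))  # M: must-be attribute
print(kano_category("like", "neutral"))     # A: attractive attribute
```

In a study like this one, each respondent's answer pair is classified this way and the most frequent category per feature (e.g., virtual try-on → Attractive) determines its label.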

18 pages, 19074 KB  
Article
Deep Fashion Designer: Generative Adversarial Networks for Fashion Item Generation Based on Many-to-One Image Translation
by Jaewon Jung, Hyeji Kim and Jongyoul Park
Electronics 2025, 14(2), 220; https://doi.org/10.3390/electronics14020220 - 7 Jan 2025
Abstract
Generative adversarial networks (GANs) have demonstrated remarkable performance in various fashion-related applications, including virtual try-ons, compatible clothing recommendations, fashion editing, and the generation of fashion items. Despite this progress, limited research has addressed the specific challenge of generating a compatible fashion item with an ensemble consisting of distinct categories, such as tops, bottoms, and shoes. In response to this gap, we propose a novel GANs framework, termed Deep Fashion Designer Generative Adversarial Networks (DFDGAN), designed to address this challenge. Our model accepts a series of source images representing different fashion categories as inputs and generates a compatible fashion item, potentially from a different category. The architecture of our model comprises several key components: an encoder, a mapping network, a generator, and a discriminator. Through rigorous experimentation, we benchmark our model against existing baselines, validating the effectiveness of each architectural choice. Furthermore, qualitative results indicate that our framework successfully generates fashion items compatible with the input items, thereby advancing the field of fashion item generation. Full article
(This article belongs to the Special Issue AI-Based Pervasive Application Services)
