Search Results (84)

Search Parameters:
Keywords = clothing recognition

24 pages, 3748 KB  
Article
Automated Recognition of Rock Mass Discontinuities on Vegetated High Slopes Using UAV Photogrammetry and an Improved Superpoint Transformer
by Peng Wan, Xianquan Han, Ruoming Zhai and Xiaoqing Gan
Remote Sens. 2026, 18(2), 357; https://doi.org/10.3390/rs18020357 - 21 Jan 2026
Viewed by 62
Abstract
Automated recognition of rock mass discontinuities in vegetated high-slope terrains remains a challenging task critical to geohazard assessment and slope stability analysis. This study presents an integrated framework combining close-range UAV photogrammetry with an Improved Superpoint Transformer (ISPT) for semantic segmentation and structural characterization. High-resolution UAV imagery was processed using an SfM–MVS photogrammetric workflow to generate dense point clouds, followed by a three-stage filtering workflow comprising cloth simulation filtering, volumetric density analysis, and VDVI-based vegetation discrimination. Feature augmentation using volumetric density and the Visible-Band Difference Vegetation Index (VDVI), together with connected-component segmentation, enhanced robustness under vegetation occlusion. Validation on four vegetated slopes in Buyun Mountain, China, achieved an overall classification accuracy of 89.5%, exceeding CANUPO (78.2%) and the baseline SPT (85.8%), with a 25-fold improvement in computational efficiency. In total, 4918 structural planes were extracted, and their orientations, dip angles, and trace lengths were automatically derived. The proposed ISPT-based framework provides an efficient and reliable approach for high-precision geotechnical characterization in complex, vegetation-covered rock mass environments. Full article
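The vegetation-discrimination step relies on the Visible-Band Difference Vegetation Index, which needs only RGB imagery. A minimal sketch of the standard VDVI computation; the 0.05 rock/vegetation threshold is an illustrative assumption, not the paper's calibration:

```python
import numpy as np

def vdvi(rgb):
    """Visible-Band Difference Vegetation Index: (2G - R - B) / (2G + R + B).

    rgb: float array (H, W, 3), channels R, G, B in [0, 1].
    Vegetation pixels tend toward positive values; bare rock toward
    values near zero or negative.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 2.0 * g - r - b
    denom = 2.0 * g + r + b
    # Guard against division by zero on pure-black pixels.
    return np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)

def rock_mask(rgb, threshold=0.05):
    """Keep pixels whose colour does not look like vegetation (threshold assumed)."""
    return vdvi(rgb) < threshold
```

In the paper's pipeline this index is one of three filters (alongside cloth simulation filtering and volumetric density analysis), applied per point using the colours projected from the UAV imagery.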

15 pages, 979 KB  
Article
Hybrid Skeleton-Based Motion Templates for Cross-View and Appearance-Robust Gait Recognition
by João Ferreira Nunes, Pedro Miguel Moreira and João Manuel R. S. Tavares
J. Imaging 2026, 12(1), 32; https://doi.org/10.3390/jimaging12010032 - 7 Jan 2026
Viewed by 184
Abstract
Gait recognition methods based on silhouette templates, such as the Gait Energy Image (GEI), achieve high accuracy under controlled conditions but often degrade when appearance varies due to viewpoint, clothing, or carried objects. In contrast, skeleton-based approaches provide interpretable motion cues but remain sensitive to pose-estimation noise. This work proposes two compact 2D skeletal descriptors—Gait Skeleton Images (GSIs)—that encode 3D joint trajectories into line-based and joint-based static templates compatible with standard 2D CNN architectures. A unified processing pipeline is introduced, including skeletal topology normalization, rigid view alignment, orthographic projection, and pixel-level rendering. Core design factors are analyzed on the GRIDDS dataset, where depth-based 3D coordinates provide stable ground truth for evaluating structural choices and rendering parameters. An extensive evaluation is then conducted on the widely used CASIA-B dataset, using 3D coordinates estimated via human pose estimation, to assess robustness under viewpoint, clothing, and carrying covariates. Results show that although GEIs achieve the highest same-view accuracy, GSI variants exhibit reduced degradation under appearance changes and demonstrate greater stability under severe cross-view conditions. These findings indicate that compact skeletal templates can complement appearance-based descriptors and may benefit further from continued advances in 3D human pose estimation. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
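The silhouette baseline the authors compare against, the Gait Energy Image, is simply a per-pixel average of aligned binary silhouettes over one gait cycle. A minimal sketch:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned, height-normalized binary silhouettes.

    silhouettes: array (T, H, W) with values in {0, 1}, covering one gait
    cycle. Returns an (H, W) template in [0, 1]: bright pixels mark body
    regions occupied in most frames, grey pixels mark moving limbs.
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    if frames.ndim != 3:
        raise ValueError("expected a (T, H, W) silhouette sequence")
    return frames.mean(axis=0)
```

The proposed GSIs replace these appearance-based pixels with rendered skeletal joints and limbs, which is why they degrade less under clothing and carrying covariates.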

16 pages, 1813 KB  
Article
Visibility and Usability of Protective Motorcycle Clothing from the Perspective of Car Drivers
by Gihyun Lee, Taehoon Kim, Jungmin Yun, Dae Young Lim, Seungju Lim, Woosung Lee, Seongjin Jang, Jongseok Lee and Hongbum Kim
Appl. Sci. 2025, 15(23), 12375; https://doi.org/10.3390/app152312375 - 21 Nov 2025
Cited by 1 | Viewed by 632
Abstract
Aiming to improve nighttime safety for motorcycle riders, this study evaluates the visibility and usability of LED and retroreflective-equipped protective motorcycle clothing versus conventional retroreflective gear. Ten male participants with driving experience were selected based on specific criteria, including normal or corrected visual acuity. Utilizing a simulated driving environment with a 75-inch screen and electric bicycles, the study employed an eye tracker to define recognition distances. It was found that LED and retroreflective-equipped clothing significantly increased the recognition distance in various nighttime scenarios, with the experimental group’s gear being visible from substantially further away than the control group’s gear. Additionally, subjective assessments showed that the LED gear scored higher in visibility and overall satisfaction, though no significant differences in wearability and activity performance were noted between the two groups. These results indicate that LED clothing could enhance rider safety at night, emphasizing the importance of such innovations for safety gear. Despite its focus on SUV drivers and specific conditions, the study provides foundational data for the development of effective protective motorcycle clothing, suggesting future research should include a broader array of vehicle types and environmental conditions. Full article

22 pages, 1770 KB  
Article
Key-Frame-Aware Hierarchical Learning for Robust Gait Recognition
by Ke Wang and Hua Huo
J. Imaging 2025, 11(11), 402; https://doi.org/10.3390/jimaging11110402 - 10 Nov 2025
Viewed by 562
Abstract
Gait recognition in unconstrained environments is severely hampered by variations in view, clothing, and carrying conditions. To address this, we introduce HierarchGait, a key-frame-aware hierarchical learning framework. Our approach uniquely integrates three complementary modules: a TemplateBlock-based Motion Extraction (TBME) for coarse-to-fine anatomical feature learning, a Sequence-Level Spatio-temporal Feature Aggregator (SSFA) to identify and prioritize discriminative key-frames, and a Frame-level Feature Re-segmentation Extractor (FFRE) to capture fine-grained motion details. This synergistic design yields a robust and comprehensive gait representation. We demonstrate the superiority of our method through extensive experiments. On the highly challenging CASIA-B dataset, HierarchGait achieves new state-of-the-art average Rank-1 accuracies of 98.1% under Normal (NM), 95.9% under Bag (BG), and 87.5% under Coat (CL) conditions. Furthermore, on the large-scale OU-MVLP dataset, our model attains a 91.5% average accuracy. These results validate the significant advantage of explicitly modeling anatomical hierarchies and temporal key-moments for robust gait recognition. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)

1077 KB  
Proceeding Paper
Enhanced Gait Recognition for Person Identification Using Spatio-Temporal Features and an Attention-Based Deep Learning Model
by Kollaparampil Thomas Thomas and Kaimadathil Pushpangadan Pushpalatha
Eng. Proc. 2025, 118(1), 101; https://doi.org/10.3390/ECSA-12-26532 - 7 Nov 2025
Viewed by 22
Abstract
Human gait has proven to be one of the standard biometrics for human identification. It is a non-invasive biometric method that uses human walking patterns specific to each human being. Most traditional methods use handcrafted features or simple convolutional models for gait analysis in human identification; these approaches can struggle with the complex temporal dependencies in gait sequences. This study proposes a novel deep learning framework that applies multi-feature input representations. It combines Gait Energy Images (GEIs), Frame Difference Gait Images (FDGIs), and Histogram of Oriented Gradients (HOG) features. This is proposed for enhancing the accuracy of human identification. The proposed work implements a CNN-based feature extractor with an attention mechanism for gait recognition. The model is trained and validated on a labeled dataset, showcasing its ability to learn discriminative gait representations with improved accuracy. The proposed pipeline of activities includes preprocessing and converting gait sequences into frames, organizing them using folder-based numerical extraction, followed by the training of an attention-enhanced convolutional network. The proposed model was found to perform better than existing methods on public datasets and works well even with different camera angles and clothing styles. Full article
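Of the three input representations, the Frame Difference Gait Image is the motion-oriented one. The abstract does not give its exact formulation, so the sketch below is one plausible reading (aggregating absolute frame-to-frame silhouette differences), stated as an assumption:

```python
import numpy as np

def frame_difference_gait_image(silhouettes):
    """Aggregate absolute frame-to-frame silhouette differences into a
    single motion template. One plausible reading of an FDGI; the exact
    formulation used by the paper is not stated in the abstract.

    silhouettes: array (T, H, W), binary frames of one gait sequence.
    Returns an (H, W) map highlighting pixels where motion occurred.
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) per-step motion
    return diffs.mean(axis=0)
```

Static body regions average out to zero here, which is exactly the complementary signal to a GEI, where static regions dominate.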

31 pages, 1455 KB  
Article
A User-Centric Context-Aware Framework for Real-Time Optimisation of Multimedia Data Privacy Protection, and Information Retention Within Multimodal AI Systems
by Ndricim Topalli and Atta Badii
Sensors 2025, 25(19), 6105; https://doi.org/10.3390/s25196105 - 3 Oct 2025
Cited by 1 | Viewed by 1341 | Correction
Abstract
The increasing use of AI systems for face, object, action, scene, and emotion recognition raises significant privacy risks, particularly when processing Personally Identifiable Information (PII). Current privacy-preserving methods lack adaptability to users’ preferences and contextual requirements, and obfuscate user faces uniformly. This research proposes a user-centric, context-aware, and ontology-driven privacy protection framework that dynamically adjusts privacy decisions based on user-defined preferences, entity sensitivity, and contextual information. The framework integrates state-of-the-art recognition models for recognising faces, objects, scenes, actions, and emotions in real time on data acquired from vision sensors (e.g., cameras). Privacy decisions are directed by a contextual ontology grounded in Contextual Integrity theory, which classifies entities into private, semi-private, or public categories. Adaptive privacy levels are enforced through obfuscation techniques and a multi-level privacy model that supports user-defined red lines (e.g., “always hide logos”). The framework also proposes a Re-Identifiability Index (RII) using soft biometric features such as gait, hairstyle, clothing, skin tone, age, and gender, to mitigate identity leakage and to support fallback protection when face recognition fails. The experimental evaluation relied on sensor-captured datasets, which replicate real-world image sensors such as surveillance cameras. User studies confirmed that the framework was effective, with over 85.2% of participants rating the obfuscation operations as highly effective, and the remaining 14.8% stating that obfuscation was adequately effective. Amongst these, 71.4% considered the balance between privacy protection and usability very satisfactory and 28% found it satisfactory. GPU acceleration was deployed to enable real-time performance of these models by reducing frame processing time from 1200 ms (CPU) to 198 ms. This ontology-driven framework employs user-defined red lines, contextual reasoning, and dual metrics (RII/IVI) to dynamically balance privacy protection with scene intelligibility. Unlike current anonymisation methods, the framework provides a real-time, user-centric, and GDPR-compliant method that operationalises privacy-by-design while preserving scene intelligibility. These features make the framework appropriate to a variety of real-world applications including healthcare, surveillance, and social media. Full article
(This article belongs to the Section Intelligent Sensors)
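The Re-Identifiability Index aggregates soft-biometric cues into one leakage score. A toy sketch of such a weighted index; the cue names follow the abstract, but the weights and the linear form are illustrative assumptions, not the paper's calibration:

```python
def reidentifiability_index(cues, weights=None):
    """Toy weighted Re-Identifiability Index over soft-biometric cues.

    cues: dict mapping cue name -> confidence in [0, 1] that the cue is
    visible and usable for re-identification in the current frame.
    The default weights below are illustrative assumptions only.
    Returns a score in [0, 1]; higher means easier to re-identify.
    """
    default = {"gait": 0.25, "hairstyle": 0.15, "clothing": 0.20,
               "skin_tone": 0.10, "age": 0.15, "gender": 0.15}
    w = weights if weights is not None else default
    total = sum(w.values())
    return sum(w[k] * cues.get(k, 0.0) for k in w) / total
```

A score like this is what lets the framework fall back to body-level obfuscation when face recognition fails but other cues would still identify the person.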

17 pages, 897 KB  
Article
Towards a Circular Fashion Future: A Textile Revalorization Model Combining Public and Expert Insights from Chile
by Cristian D. Palma and Priscilla Cabello-Avilez
Sustainability 2025, 17(19), 8670; https://doi.org/10.3390/su17198670 - 26 Sep 2025
Viewed by 1838
Abstract
The global textile industry has a significant environmental impact, driven by fast fashion and rising consumption, which leads to large amounts of waste. In Chile, this problem is especially visible, with thousands of tons of discarded clothing accumulating in open areas and landfills. This study explores how to design a practical textile revalorization system grounded in local reality. We used a qualitative mixed-methods approach, combining semi-structured interviews with six experts in textile circularity and an online survey completed by 328 people. Thematic analysis revealed low public awareness of textile recycling, limited consumer participation, and major structural barriers, including scarce infrastructure and unclear regulations. Experts emphasized the importance of coordinated action among government, industry, and grassroots recyclers, while survey respondents highlighted the need for education and easier recycling options. Based on these insights, we propose an integrated framework that combines education campaigns, better recycling systems, and formal recognition of informal recyclers’ work. While centered on Chile, the study offers ideas that could support textile circularity efforts in other countries facing similar challenges. By merging expert knowledge with everyday public perspectives, the approach helps design more realistic and socially grounded solutions for textile waste management. As with many exploratory frameworks, external validation remains a necessary step for future research to strengthen its robustness and applicability. Full article

24 pages, 824 KB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Cited by 1 | Viewed by 1514
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used to recognize individuals who are challenging to camouflage and do not require a person’s cooperation. The general face-based person recognition system often fails to determine the offender’s identity when they conceal their face by wearing helmets and masks to evade identification. In such cases, gait-based recognition is ideal for identifying offenders, and most existing work leverages a deep learning (DL) model. However, a single model often fails to capture a comprehensive selection of refined patterns in input data when external factors are present, such as variation in viewing angle, clothing, and carrying conditions. In response to this, this paper introduces a fusion-based multi-model gait recognition framework that leverages the potential of convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble manner to enhance gait recognition performance. Here, CNNs capture spatiotemporal features, and ViT features multiple attention layers that focus on a particular region of the gait image. The first step in this framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over a gait cycle, which can handle the left–right gait symmetry of the gait. After that, the GEI image is fed through multiple pre-trained models and fine-tuned precisely to extract the depth spatiotemporal feature. Later, three separate fusion strategies are conducted, and the first one is decision-level fusion (DLF), which takes each model’s decision and employs majority voting for the final decision. The second is feature-level fusion (FLF), which combines the features from individual models through pointwise addition before performing gait recognition. Finally, a hybrid fusion combines DLF and FLF for gait recognition. The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance. Full article
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
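The two base fusion strategies named in the abstract reduce to a few lines each. A minimal sketch, assuming per-model class labels for DLF and equal-length per-model feature vectors for FLF:

```python
import numpy as np

def decision_level_fusion(labels):
    """DLF: majority vote over the class labels predicted by each model."""
    values, counts = np.unique(np.asarray(labels), return_counts=True)
    return values[np.argmax(counts)]  # ties resolve to the smallest label

def feature_level_fusion(feature_vectors):
    """FLF: pointwise addition of per-model feature vectors, producing a
    single fused vector that is then used for matching."""
    return np.sum(np.stack([np.asarray(v, dtype=np.float64)
                            for v in feature_vectors], axis=0), axis=0)
```

The paper's hybrid strategy combines both: fuse features, classify, then let that decision vote alongside the individual models' decisions.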

27 pages, 3417 KB  
Article
GaitCSF: Multi-Modal Gait Recognition Network Based on Channel Shuffle Regulation and Spatial-Frequency Joint Learning
by Siwei Wei, Xiangyuan Xu, Dewen Liu, Chunzhi Wang, Lingyu Yan and Wangyu Wu
Sensors 2025, 25(12), 3759; https://doi.org/10.3390/s25123759 - 16 Jun 2025
Cited by 1 | Viewed by 1603
Abstract
Gait recognition, as a non-contact biometric technology, offers unique advantages in scenarios requiring long-distance identification without active cooperation from subjects. However, existing gait recognition methods predominantly rely on single-modal data, which demonstrates insufficient feature expression capabilities when confronted with complex factors in real-world environments, including viewpoint variations, clothing differences, occlusion problems, and illumination changes. This paper addresses these challenges by introducing a multi-modal gait recognition network based on channel shuffle regulation and spatial-frequency joint learning, which integrates two complementary modalities (silhouette data and heatmap data) to construct a more comprehensive gait representation. The channel shuffle-based feature selective regulation module achieves cross-channel information interaction and feature enhancement through channel grouping and feature shuffling strategies. This module divides input features along the channel dimension into multiple subspaces, which undergo channel-aware and spatial-aware processing to capture dependency relationships across different dimensions. Subsequently, channel shuffling operations facilitate information exchange between different semantic groups, achieving adaptive enhancement and optimization of features with relatively low parameter overhead. The spatial-frequency joint learning module maps spatiotemporal features to the spectral domain through fast Fourier transform, effectively capturing inherent periodic patterns and long-range dependencies in gait sequences. The global receptive field advantage of frequency domain processing enables the model to transcend local spatiotemporal constraints and capture global motion patterns. Concurrently, the spatial domain processing branch balances the contributions of frequency and spatial domain information through an adaptive weighting mechanism, maintaining computational efficiency while enhancing features. Experimental results demonstrate that the proposed GaitCSF model achieves significant performance improvements on mainstream datasets including GREW, Gait3D, and SUSTech1k, breaking through the performance bottlenecks of traditional methods. The implications of this research are significant for improving the performance and robustness of gait recognition systems when implemented in practical application scenarios. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
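The core of the spectral branch is an FFT over the time axis, which exposes the periodic structure of a gait cycle as energy at the stride frequency. A minimal stand-in for that mapping; keeping only the k lowest-frequency magnitudes, and k itself, are assumptions of this sketch, not the paper's design:

```python
import numpy as np

def spectral_gait_features(sequence, k=8):
    """Map a per-frame feature sequence into the spectral domain with a
    real FFT over time and keep the k lowest-frequency magnitudes.

    sequence: array (T, D) -- T frames of D-dimensional features.
    Returns (min(k, T // 2 + 1), D) spectral magnitudes. Bin 0 is the
    DC component (the sequence mean, scaled by T); a periodic gait
    shows a peak at its stride-frequency bin.
    """
    seq = np.asarray(sequence, dtype=np.float64)
    spec = np.fft.rfft(seq, axis=0)  # global receptive field over time
    return np.abs(spec)[:k]
```

Frequency-domain features like these see the whole sequence at once, which is the "global receptive field" advantage the abstract contrasts with local spatiotemporal convolutions.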

57 pages, 4508 KB  
Review
Person Recognition via Gait: A Review of Covariate Impact and Challenges
by Abdul Basit Mughal, Rafi Ullah Khan, Amine Bermak and Atiq ur Rehman
Sensors 2025, 25(11), 3471; https://doi.org/10.3390/s25113471 - 30 May 2025
Cited by 3 | Viewed by 4526
Abstract
Human gait identification is a biometric technique that permits recognizing an individual from a long distance focusing on numerous features such as movement, time, and clothing. This approach in particular is highly useful in video surveillance scenarios, where biometric systems allow people to be easily recognized without intruding on their privacy. In the domain of computer vision, one of the essential and most difficult tasks is tracking a person across multiple camera views, specifically, recognizing the similar person in diverse scenes. However, the accuracy of the gait identification system is significantly affected by covariate factors, such as different view angles, clothing, walking speeds, occlusion, and low-lighting conditions. Previous studies have often overlooked the influence of these factors, leaving a gap in the comprehensive understanding of gait recognition systems. This paper provides a comprehensive review of the most effective gait recognition methods, assessing their performance across various image source databases while highlighting the limitations of existing datasets. Additionally, it explores the influence of key covariate factors, such as viewing angle, clothing, and environmental conditions, on model performance. The paper also compares traditional gait recognition methods with advanced deep learning techniques, offering theoretical insights into the impact of covariates and addressing real-world application challenges. The contrasts and discussions presented provide valuable insights for developing a robust and improved gait-based identification framework for future advancements. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensor-Based Gait Recognition)

24 pages, 10115 KB  
Article
iSight: A Smart Clothing Management System to Empower Blind and Visually Impaired Individuals
by Daniel Rocha, Celina P. Leão, Filomena Soares and Vítor Carvalho
Information 2025, 16(5), 383; https://doi.org/10.3390/info16050383 - 3 May 2025
Viewed by 2702
Abstract
Clothing management is a major challenge for blind and visually impaired individuals to perform independently. This research developed and validated the iSight, a mechatronic smart wardrobe prototype, integrating computer vision and artificial intelligence to identify clothing types, colours, and alterations. Tested with 15 participants, iSight achieved high user satisfaction, with 60% rating it as very accurate in clothing identification, 80% in colour detection, and 86.7% in near-field communication tag recognition. Statistical analyses confirmed its positive impact on confidence, independence, and well-being. Despite the fact that improvements in menu complexity and fabric information were suggested, iSight proves to be a robust, user-friendly assistive tool with the potential to enhance the daily living of blind and visually impaired individuals. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

24 pages, 22571 KB  
Article
Non-Invasive Multivariate Prediction of Human Thermal Comfort Based on Facial Temperatures and Thermal Adaptive Action Recognition
by Kangji Li, Fukang Liu, Yanpei Luo and Mushtaque Ali Khoso
Energies 2025, 18(9), 2332; https://doi.org/10.3390/en18092332 - 2 May 2025
Cited by 2 | Viewed by 1212
Abstract
Accurately assessing human thermal comfort plays a key role in improving indoor environmental quality and energy efficiency of buildings. Non-invasive thermal comfort recognition has shown great application potential compared with other methods. Based on thermal correlation analysis, human facial temperature recognition and body thermal adaptive action detection are both performed by one binocular infrared camera. The YOLOv5 algorithm is applied to extract facial temperatures of key regions, through which the random forest model is used for thermal comfort recognition. Meanwhile, the Mediapipe tool is used to detect probable thermal adaptive actions, based on which the corresponding thermal comfort level is also assessed. The two results are combined with PMV calculation for multivariate human thermal comfort prediction, and a weighted fusion strategy is designed. Seventeen subjects were invited to participate in experiments for data collection of facial temperatures and thermal adaptive actions in different thermal conditions. Prediction results show that, by using the experiment data, the overall accuracies of the proposed fusion strategy reach 82.86% (7-class thermal sensation voting, TSV) and 94.29% (3-class TSV), which are better than those of facial temperature-based thermal comfort prediction (7-class: 78.57%, 3-class: 90%) and PMV model (7-class: 20.71%, 3-class: 65%). If probable thermal adaptive actions are detected, the accuracy of the proposed fusion model is further improved to 86.8% (7-class) and 100% (3-class). Furthermore, by changing clothing thermal resistance and metabolic level of subjects in experiments, the influence on thermal comfort prediction is investigated. From the results, the proposed strategy still achieves better accuracy compared with other single methods, which shows good robustness and generalization performance in different applications. Full article
(This article belongs to the Section G: Energy and Buildings)

18 pages, 1174 KB  
Article
GaitRGA: Gait Recognition Based on Relation-Aware Global Attention
by Jinhang Liu, Yunfan Ke, Ting Zhou, Yan Qiu and Chunzhi Wang
Sensors 2025, 25(8), 2337; https://doi.org/10.3390/s25082337 - 8 Apr 2025
Cited by 3 | Viewed by 1522
Abstract
Gait recognition is a long-range biometric technique based on walking posture; because it does not require the subject’s cooperation and is non-invasive, it has been highly sought after in recent years. Although existing methods have achieved impressive results in laboratory environments, recognition performance is still deficient in real-world applications, especially when confronted with complex and dynamic scenarios. The major challenges in gait recognition include changes in viewing angle, occlusion, clothing changes, and significant differences in gait characteristics under different walking conditions. To solve these issues, we propose a gait recognition method based on relation-aware global attention. Specifically, we introduce a Relation-Aware Global Attention (RGA) module, which captures global structural information within gait sequences to enable more precise attention learning. Unlike traditional gait recognition methods that rely solely on local convolutions, we stack pairwise associations between each feature position in the gait silhouette and all other feature positions, along with the features themselves, using a shallow convolutional model to learn attention. This approach is particularly effective in gait recognition due to the physical constraints on human walking postures, allowing the structural information embedded in the global relationships to aid in inferring the semantics and focus areas of various body parts, thereby improving the differentiation of gait features across individuals. Our experimental results on multiple datasets (GREW, Gait3D, SUSTech1k) demonstrate that GaitRGA achieves significant performance improvements, especially in real-world scenarios. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
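The "stack pairwise associations with the features themselves" step can be made concrete. A sketch of that input construction only, using dot-product affinities as the relation function; the affinity choice and shapes here are assumptions, and the shallow convolutional attention model that consumes this input is omitted:

```python
import numpy as np

def relation_aware_input(feats):
    """Concatenate each feature position with its pairwise affinities to
    every other position -- the input from which a shallow model would
    then learn attention weights.

    feats: array (N, D) -- N spatial positions of a gait feature map.
    Returns (N, D + N): each row is [own feature | its relation row].
    """
    feats = np.asarray(feats, dtype=np.float64)
    affinity = feats @ feats.T  # (N, N) pairwise dot-product relations
    return np.concatenate([feats, affinity], axis=1)
```

Because every row carries its relations to all N positions, the downstream attention sees global structure even though its own convolutions stay shallow and local.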

14 pages, 4538 KB  
Article
Digital Restoration of Tang Dynasty Ladies’ Costumes in the “Ramming & Washing Silk Painting” Based on AI and 3D Technology
by Guangzhou Zhu, Yixuan Li and Siyi Zhang
Electronics 2025, 14(6), 1139; https://doi.org/10.3390/electronics14061139 - 14 Mar 2025
Cited by 2 | Viewed by 2401
Abstract
Digital restoration has become an effective way to restore traditional costumes, but there are still problems such as low modeling efficiency and difficult pattern recognition. This study focuses on Tang Dynasty ladies in the “ramming & washing silk painting”, integrating AI + 3D technology with costume culture for comprehensive restoration. The methodology includes era-specific analysis for identity and body shape determination, image-based sizing, fabric identification, color matching, and pattern restoration using AI and edge-detection algorithms. Comparative analysis with original images guides 3D modeling and AI-assisted restoration of makeup, hairstyles, and accessories. The successful restoration of the lady on the left’s costume in the “ramming & washing silk painting” proves the effectiveness of this AI + 3D method in protecting and innovating traditional clothing. Full article

9 pages, 1973 KB  
Proceeding Paper
Recommender System for Apparel Products Based on Image Recognition Using Convolutional Neural Networks
by Chin-Chih Chang, Chi-Hung Wei, Yen-Hsiang Wang, Chyuan-Huei Thomas Yang and Sean Hsiao
Eng. Proc. 2025, 89(1), 38; https://doi.org/10.3390/engproc2025089038 - 14 Mar 2025
Cited by 1 | Viewed by 2051
Abstract
In e-commerce and fashion, personalized recommendations are used to enhance user experience and engagement. In this study, an apparel recognition and recommender system (ARRS) using convolutional neural networks (CNNs) was employed to analyze apparel images, extract features, and provide accurate recognition and recommendations. By learning patterns and features of clothes, the system enables robust recognition and personalized suggestions. The effectiveness of ARRS in recognizing apparel and generating relevant recommendations was validated. The system enhances user satisfaction and engagement on fashion e-commerce platforms. Full article
