Article

iSight: A Smart Clothing Management System to Empower Blind and Visually Impaired Individuals

1 Algoritmi Research Centre/Intelligent Systems Associate Laboratory, University of Minho, 4800-058 Guimarães, Portugal
2 2Ai, School of Technology, Polytechnic University of Cávado and Ave, 4750-810 Barcelos, Portugal
3 INL—International Iberian Nanotechnology Laboratory, 4715-330 Braga, Portugal
* Authors to whom correspondence should be addressed.
Information 2025, 16(5), 383; https://doi.org/10.3390/info16050383
Submission received: 17 February 2025 / Revised: 30 April 2025 / Accepted: 30 April 2025 / Published: 3 May 2025
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

Abstract

Clothing management is a major challenge for blind and visually impaired individuals to perform independently. This research developed and validated iSight, a mechatronic smart wardrobe prototype that integrates computer vision and artificial intelligence to identify clothing types, colours, and alterations. Tested with 15 participants, iSight achieved high user satisfaction: 60% rated it as very accurate in clothing identification, 80% in colour detection, and 86.7% in near-field communication tag recognition. Statistical analyses confirmed its positive impact on confidence, independence, and well-being. Although improvements to menu complexity and fabric information were suggested, iSight proves to be a robust, user-friendly assistive tool with the potential to enhance the daily living of blind and visually impaired individuals.

1. Introduction

Blindness and visual impairment affect millions of individuals worldwide, significantly impacting their quality of life. According to the International Agency for the Prevention of Blindness, there are approximately 43 million blind individuals globally, while 295 million suffer from moderate-to-severe visual impairments [1]. Additionally, the World Health Organization (WHO) estimates that at least 2.2 billion people experience near or distance vision impairment, with nearly half of these cases categorized as severe. In Portugal, data from the 2011 census indicate that approximately 900,000 individuals face vision difficulties, including around 28,000 who are blind [2]. Such impairments are not only a physical challenge but also influence psychological and cognitive functioning, with studies showing a clear link between visual impairment and diminished health outcomes and quality of life [3,4]. Assistive technologies have the potential to mitigate the negative impacts of blindness, enhancing autonomy and social inclusion. These innovations align with the United Nations Sustainable Development Goal (SDG) 10, which aims to reduce inequalities and promote inclusion for all individuals [5]. While advancements in assistive devices have primarily focused on navigation, mobility, and object recognition, a critical aspect of daily life remains underexplored: clothing management and aesthetics. Clothing plays a significant role in personal identity, self-expression, and social perception, making its management essential for overall well-being and self-esteem [6,7]. However, for blind individuals, dressing can be a source of lack of self-confidence and stress, as challenges such as recognizing clothing condition, detecting stains, and identifying colours often necessitate reliance on others. Despite the potential of assistive technologies, aesthetic considerations in clothing management for blind people have received limited attention. An analysis of research trends from 2007 to 2023 highlights significant advancements in navigation and mobility technologies for the blind [8,9,10], with increased focus on computer vision and object detection in recent years (Figure 1). Data were obtained from the SCOPUS database, using the following search engine keywords: ‘Navigation AND Blind People’, ‘Mobility AND Blind People’, ‘Object Detection AND Blind People’, ‘Computer Vision AND Blind People’, and ‘Clothes AND Blind People’.

1.1. Limitations of Existing Assistive Technology

Although some assistive technologies have been proposed to support clothing-related tasks for visually impaired individuals, most remain limited in scope and practical usability. For instance, Yang et al. [11] developed a camera-based prototype that verbally describes clothing patterns and colours, helping users identify visual characteristics, such as stripes or checks. In contrast, Medeiros et al. [12] introduced a fingertip-mounted camera system that enables pattern recognition through tactile exploration of garments, simulating a visual scan guided by touch.
Other approaches include Vision4All [13], an AI-powered fashion assistant that identifies garment attributes, such as colours, textures, and categories. While innovative, these systems often focus solely on individual features and do not address more comprehensive needs, such as global garment condition or defect detection.
Smart wardrobe solutions employing Radio Frequency Identification (RFID) [14] and Near Field Communication (NFC) [15] technologies allow for garment tracking and outfit selection. However, they rely heavily on pre-tagged clothing and manual data input, limiting their autonomy and flexibility for blind users. Mobile applications like BrowseWithMe [16] assist with online shopping by structuring web content for better accessibility, yet they do not extract visual characteristics from users’ actual clothes.
Despite their value, these tools typically address only isolated tasks, such as identifying a colour or pattern, and lack the ability to perform full garment analysis, including global colour estimation or defect detection. As a result, blind users often remain dependent on external assistance for clothing selection, quality assessment, and wardrobe organization.

1.2. Gaps in Computer Vision Approaches for Clothing Analysis

Beyond assistive solutions, the field of computer vision has seen advancements in clothing-related tasks, particularly in two main domains: industrial textile defect detection and fashion-oriented garment segmentation. However, these approaches present fundamental limitations when applied to the practical needs of visually impaired users.
In the textile industry, numerous studies have addressed fabric defect detection using machine vision and deep learning techniques [17,18]. For instance, several studies, including those by He et al. [19], Jing et al. [20], Xie and Wu [21], and Han and Yu [22], have demonstrated the effectiveness of deep learning approaches, particularly convolutional neural networks (CNNs) [23], in enhancing surface defect detection accuracy beyond traditional image processing techniques. These models perform well on uniform, tensioned fabric rolls photographed under controlled lighting conditions. However, they are not designed for clothing items in real-world use, which often feature wrinkles, varying textures, irregular folds, and occlusions. Additionally, fabrics in industrial contexts are generally flat and stretched, whereas worn or stored garments exhibit natural deformations that complicate defect detection.
Similarly, fashion-related segmentation studies—including DeepFashion [24], DeepFashion2 [25], ModaNet [26], ATR [27,28], and Chictopia10k [29]—focus primarily on garments worn by human models. These datasets are frequently used in conjunction with pose estimation techniques to detect garment categories, landmarks, and visual attributes. While methods such as Mask R-CNN [30] achieve high accuracy in fashion parsing tasks, they typically segment clothing only partially and from predefined viewpoints.
Such approaches are well-suited for applications like fashion recommendation, attribute recognition, or e-commerce, but are ill-suited for assistive scenarios where garments must be analysed independently, without being worn. In particular, full pixel-wise segmentation of unfolded garments is essential to accurately detect defects and to compute global colour features. Without analysing the entire visible area of a clothing item, any feedback to the user may be incomplete or misleading.
Therefore, despite the technical sophistication of these vision systems, they do not respond to the concrete and practical need of helping blind individuals manage their wardrobe autonomously.

1.3. Proposed Approach

To address these gaps, this research proposes a comprehensive and integrated solution based on computer vision techniques, specifically designed to support blind individuals in the independent management of their clothing. The system is composed of a smart wardrobe prototype equipped with image capture and controlled lighting conditions, enabling garments to be analysed while unfolded and isolated. This controlled environment ensures high-quality image acquisition for the extraction of visual characteristics, including clothing category, colour, and defects (e.g., stains, holes). Each clothing item is associated with an NFC tag, allowing its unique identification and management within the system.
Complementing the physical device, a mobile application acts as a virtual wardrobe, where users can view, manage, and organize their clothing collection. Each analysed item is stored with its visual attributes and categorized into intuitive groups such as tops, bottoms, shoes, and accessories. Through the app, users can consult detailed garment information, organize their wardrobe, and request new analyses by remotely communicating with the smart wardrobe.
Importantly, the development and validation of the system were carried out in cooperation with key Portuguese organizations, including the Association of the Blind and Amblyopes of Portugal (ACAPO) and the Association of Support for the Visually Impaired of Braga District (AADVDB). Additionally, valuable insights were gathered through a national survey, ensuring that the solution effectively aligns with the real needs and preferences of its users. This work addresses a critical technological gap while emphasizing the importance of aesthetics and clothing management in fostering self-confidence, autonomy, and social inclusion among blind individuals. By combining practical functionality with a focus on user experience, this research not only provides a novel technological contribution but also paves the way for future advancements in assistive solutions that prioritize aesthetics and inclusivity.
Furthermore, the solution directly supports the goals of the United Nations Sustainable Development Goal (SDG) 10 [5]—Reduced Inequalities—by promoting equal access to independent living resources and empowering individuals with disabilities to fully participate in society.

1.4. Paper Structure

This paper is organized into six sections. Section 2 describes the methodology applied, followed by the results of the iSight evaluation in Section 3 and the discussion in Section 4. Section 5 presents the key findings and limitations of the study, while Section 6 outlines directions for future research.

2. Methodology

This section provides an overview of the methodology employed in developing the iSight prototype. Building on previously published work [31,32,33], this methodology outlines the development of a mechatronic wardrobe prototype and an accessible mobile application, both of which leverage artificial intelligence and computer vision to identify clothing characteristics such as category, colour, and defects.

2.1. System Overview and Components

The iSight system is composed of three key components: deep learning algorithms, a mechatronic wardrobe, and a mobile application. Together, these components create an interconnected solution that allows users to classify clothing items, detect colours, and identify defects autonomously. The overall iSight system workflow is presented in Figure 2, which illustrates the seamless integration between hardware and software to deliver accessible and user-friendly outputs.
The smart wardrobe serves as the core of the system, enabling controlled image acquisition and Near Field Communication (NFC) tag reading. These features are complemented by a mobile application designed to ensure intuitive navigation and compliance with accessibility standards, facilitating interaction with the wardrobe. Advanced AI algorithms are employed to process captured data, providing actionable insights, such as garment classification and defect detection. This integrated approach ensures that users benefit from precise, reliable, and accessible clothing management.
The smart wardrobe prototype was designed to simulate real-world conditions while maintaining portability and scalability for practical applications. Its primary objective is to enable the automated capture and analysis of garments under controlled conditions. The prototype’s compact design allows for adaptability across various settings while integrating all necessary functionalities.

2.2. Development of Deep Learning Algorithms

Previous research studies present three experiments employing deep learning techniques to support automated clothing management for blind people. In the first experiment, clothing category classification was performed using Convolutional Neural Networks (CNNs) via transfer learning. Multiple architectures (e.g., ResNet, MobileNet, GoogleNet) and data augmentation strategies were evaluated, yielding validation accuracies of up to approximately 91% and demonstrating the effectiveness of fine-tuning all network layers with pre-trained weights [32].
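For illustration, a minimal transfer-learning sketch in the spirit of this first experiment is shown below. It assumes PyTorch/torchvision and a ResNet backbone (one of the architectures listed above); the class count is borrowed from the eight-category dataset described next, and the optimiser and learning rate are assumptions rather than the exact configuration reported in [32].

```python
# Minimal transfer-learning sketch (framework, class count, and hyperparameters are
# illustrative assumptions, not the authors' exact code).
import torch
import torch.nn as nn
from torchvision import models

NUM_CATEGORIES = 8  # e.g., dresses, jackets, pants, polos, shirts, shoes, shorts, t-shirts

# Load an ImageNet-pre-trained backbone and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CATEGORIES)

# Fine-tune all layers (reported as the most effective strategy): every parameter
# stays trainable and receives a small learning rate.
for param in model.parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step over a batch of garment images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```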
The second experiment, depicted in Figure 3, extended the methodology to include clothing segmentation and colour extraction. The developed dataset consists of 2000 images that are equally distributed across eight categories of garments: dresses, jackets, pants, polos, shirts, shoes, shorts, and t-shirts. The dataset was balanced across all classes to ensure fairness in training. It was split into 70% for training, 20% for validation, and 10% for testing.
A fine-tuned YOLOv8s model was used to segment garments under varied background conditions, and the resulting segmentation masks were converted into the Hue Saturation Value (HSV) colour space to extract nine primary colours.
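As a rough sketch of this stage, the snippet below combines the ultralytics YOLOv8 segmentation API with an OpenCV HSV conversion. The weight file name is hypothetical, and the hue-only binning is a simplification of the nine-colour scheme described in the earlier work (which also accounts for low-saturation and low-value colours).

```python
# Sketch of the segmentation + colour-extraction stage (assumes the ultralytics
# YOLOv8 API and OpenCV; the fine-tuned weight file name is hypothetical).
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("isight_yolov8s_seg.pt")  # hypothetical fine-tuned segmentation weights

def dominant_hue(image_path: str) -> tuple[str, float]:
    """Segment the garment and return its dominant hue name and mask coverage."""
    result = model(image_path)[0]
    image = cv2.imread(image_path)
    if result.masks is None:            # no garment detected
        return "unknown", 0.0

    # Merge all predicted instance masks into one garment mask at image resolution.
    masks = result.masks.data.cpu().numpy()              # (N, h, w) binary masks
    merged = np.any(masks, axis=0).astype(np.uint8)
    merged = cv2.resize(merged, (image.shape[1], image.shape[0]))

    # Convert only the garment pixels to HSV and histogram the hue channel.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hues = hsv[..., 0][merged > 0]
    names = ["red", "orange", "yellow", "green", "cyan", "blue", "purple", "pink", "red"]
    bins = np.linspace(0, 180, len(names) + 1)            # OpenCV hue range is 0-179
    hist, _ = np.histogram(hues, bins=bins)
    return names[int(hist.argmax())], float(merged.mean())
```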
The performance of the YOLOv8s model in segmenting clothing was evaluated using key performance metrics, including precision, recall, and mean average precision (mAP) at an Intersection over Union (IoU) of 0.5. Table 1 summarizes the performance results.
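For reference, the metrics reported in Table 1 follow their standard definitions, where TP, FP, and FN denote true positives, false positives, and false negatives, and M_pred and M_gt are the predicted and ground-truth masks:

\[ \mathrm{Precision} = \frac{TP}{TP+FP}, \qquad \mathrm{Recall} = \frac{TP}{TP+FN}, \qquad \mathrm{IoU} = \frac{|M_{\mathrm{pred}} \cap M_{\mathrm{gt}}|}{|M_{\mathrm{pred}} \cup M_{\mathrm{gt}}|} \]

mAP@0.5 then averages, over classes, the average precision computed from the precision-recall curve, counting a predicted mask as correct when its IoU with the ground-truth mask is at least 0.5.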
Table 1 presents the performance of the YOLOv8s model in segmenting clothing across the various categories, with detailed mask-level metrics for each type of garment. Across all categories (“all”), the model achieved a precision of 0.941, a recall of 0.896, and a mAP at IoU 0.5 of 0.965, indicating strong overall mask-segmentation performance. In the dress category, the model obtained a precision of 0.849, a recall of 0.917, and a mAP of 0.941; performance is good, although precision is slightly lower than in other categories. The jacket category showed excellent results, with a precision of 1.0, a recall of 0.91, and a mAP of 0.99, demonstrating the model’s ability to identify and segment jackets. For pants, a precision of 0.989, a recall of 0.875, and a mAP of 0.97 were obtained, indicating near-perfect mask segmentation. The polo category had a precision of 0.953, a recall of 0.837, and a mAP of 0.96; despite the high precision, the lower recall indicates that some garments may not have been fully detected. Shirts achieved a precision of 0.863, a recall of 0.875, and a mAP of 0.925, showing balanced performance but with room for improvement. Shoes reached an almost perfect precision of 0.996, a recall of 1.0, and a mAP of 0.995, the best performance among the categories. For shorts, the precision was 0.992, the recall 0.833, and the mAP 0.964; the lower recall indicates that some instances may have been missed despite the high precision. Lastly, the t-shirt category showed a reasonable balance between precision and recall (0.887 and 0.917, respectively) and a mAP of 0.971.
This dual approach not only improved classification accuracy but also provided precise colour characterization essential for practical garment management.
In the third experiment, the focus was on the detection and classification of modifications in clothing, specifically stains and holes. Initially, a Mask R-CNN model was employed to detect stains using a small curated dataset [32]. Subsequently, the dataset was expanded to include hole defects, and YOLOv5 variants were fine-tuned using additional data augmentation—namely horizontal flipping, random brightness variation, and rotation—to improve generalization [33]. Although high precision was achieved, the results highlighted challenges related to false negatives, indicating areas for future improvement. The effectiveness of clothing detection is strongly influenced by garment positioning within the detection frame and consistent lighting conditions. During testing, garments were aligned parallel to the device’s sides to ensure repeatability. During training, data augmentation—including random rotations, brightness variation, and partial occlusion—was applied to enhance model robustness.
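As a sketch of the augmentation strategy named above (YOLOv5 itself configures augmentation through its own hyperparameter file), an equivalent torchvision pipeline might look as follows; the probability and magnitude values are assumptions chosen only for illustration.

```python
# Illustrative augmentation pipeline mirroring the strategies named above
# (horizontal flipping, brightness variation, rotation); parameter values are assumptions.
from torchvision import transforms

train_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3),   # random brightness variation
    transforms.RandomRotation(degrees=15),    # small random rotations
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.2),          # rough stand-in for the partial occlusion mentioned above
])
```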
Overall, these experiments collectively demonstrate that the integration of advanced deep learning methods offers a robust and scalable solution for automated clothing analysis. These models were trained using custom datasets focused on clothing classification, colour identification, and defect detection. Their integration enabled automated analysis aligned with the practical needs of blind users. The detailed methodologies and results are thoroughly documented in the previously published works [32,33], which substantiate the system’s potential for enhancing assistive technologies for blind people.

2.3. Smart Wardrobe Design

The proposed smart wardrobe prototype is developed to simulate real-world garment management scenarios and to enable seamless transition of the hardware infrastructure to individual wardrobes. The wardrobe’s physical dimensions—123 cm in height, 120 cm in width, and 63 cm in depth—were chosen to accommodate a wide range of garment sizes (Figure 4).
The wardrobe integrates several hardware components that are fundamental to its operation:
  • Camera Module: The Raspberry Pi Camera Module V3 captures high-resolution images with a wide field of view, enabling detailed garment analysis.
  • Stepper Motor: A Nema 17 stepper motor, controlled by an A4988 motor driver, rotates garments precisely, ensuring comprehensive coverage.
  • LED Lighting: Uniform illumination is achieved through LED strips, enhancing the visibility of garment details.
  • NFC Reader: An ITEAD PN532 NFC module reads tags affixed to garments, associating each item with a unique identifier.
  • Controller: A Raspberry Pi 4 Model B serves as the system’s computational hub, managing hardware operations and interfacing with the mobile application.
These components are connected through a carefully designed system architecture, which highlights the interconnections between the controller, motor, lighting, camera, and NFC reader. This architecture ensures seamless operation and reliable performance across all functional aspects.
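A minimal sketch of how the Raspberry Pi might pulse the A4988 driver to rotate a garment is given below; the GPIO pin numbers, step delay, and full-step resolution are assumptions, not the prototype’s actual wiring or firmware.

```python
# Minimal sketch of the controller driving the A4988 stepper driver from the Raspberry Pi
# (GPIO pin numbers, step delay, and stepping mode are assumptions).
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21    # hypothetical BCM pins wired to the A4988
STEPS_PER_REV = 200           # NEMA 17 full-step resolution (1.8 degrees per step)

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def rotate(degrees: float, clockwise: bool = True, step_delay: float = 0.002) -> None:
    """Rotate the garment hanger by the requested angle."""
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    steps = int(STEPS_PER_REV * degrees / 360)
    for _ in range(steps):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(step_delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(step_delay)

# Flip the garment so both sides can be photographed.
rotate(180)
```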
The interior is lined with chroma key fabric to standardize background conditions during image capture, ensuring consistent quality. A wide-angle 12 MP camera module is mounted centrally to capture detailed images of garments, while LED lighting strips provide uniform illumination, as shown in Figure 5.
The design also incorporates a stepper motor, which rotates garments by 180 degrees to allow imaging from both sides. These elements work cohesively to maximize efficiency and reliability during image acquisition. Figure 6 illustrates the placement of key components such as the camera, motor, lighting system, and NFC reader.
The NFC reader integrated into the system assigns a unique identifier to each clothing item, which is then associated with detailed characteristics, such as type and colour. This seamless connection enables users to manage their wardrobe with ease. Once the garment is scanned, the system retrieves and displays its attributes, such as description, type, and colour. This process ensures that all clothing details are readily accessible, providing users with an organised and efficient means of viewing and managing their wardrobe, as illustrated in Figure 7. When users wish to retrieve a specific item, they can simply view its details in the mobile app and then locate the corresponding NFC tag attached to the garment, allowing for easy identification and use.
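The tag-to-attribute association can be pictured as a simple keyed lookup. The sketch below uses SQLite with hypothetical field names purely to illustrate the registration and retrieval steps; it is not the system’s actual schema.

```python
# Illustrative mapping between NFC tag UIDs and garment records (storage backend
# and field names are assumptions, not the actual schema).
import sqlite3

conn = sqlite3.connect("wardrobe.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS garments (
           nfc_uid TEXT PRIMARY KEY,
           description TEXT,
           category TEXT,
           colour TEXT
       )"""
)

def register_garment(nfc_uid: str, description: str, category: str, colour: str) -> None:
    """Associate a scanned NFC tag with the attributes extracted by the vision pipeline."""
    conn.execute(
        "INSERT OR REPLACE INTO garments VALUES (?, ?, ?, ?)",
        (nfc_uid, description, category, colour),
    )
    conn.commit()

def lookup_garment(nfc_uid: str) -> dict | None:
    """Return the stored attributes for a scanned tag, if the item is known."""
    row = conn.execute(
        "SELECT description, category, colour FROM garments WHERE nfc_uid = ?", (nfc_uid,)
    ).fetchone()
    return dict(zip(("description", "category", "colour"), row)) if row else None
```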

2.4. Mobile Application Development

The mobile application, iSight, was designed with accessibility as a primary objective, adhering to EN 301 549 [34] and WCAG 2.1 AA standards [35]. These standards ensure compatibility with assistive technologies, such as screen readers and Braille displays, enabling visually impaired users to interact with the system effectively. In Portugal, these standards are enforced through the national digital interoperability regulation, the Regulamento Nacional de Interoperabilidade Digital (RNID) [36], ensuring compliance with digital accessibility requirements.
Accessibility was prioritized during the application’s development to ensure compatibility with assistive technologies, such as screen readers and Braille displays. Key features include gesture-based navigation and auditory feedback, which allow users to interact with the application intuitively. The interface, Figure 8, was developed using a cross-platform framework to ensure compatibility with both iOS and Android devices.
The mobile application serves as the primary medium for user interaction, offering several core functionalities:
  • NFC Tag Reading: Users can identify garments by scanning NFC tags attached to them, facilitating quick and accurate retrieval of clothing details.
  • Garment Management: The application enables users to add, edit, and organize clothing items into categories such as tops, bottoms, and footwear.
  • AI-Powered Analysis: Users can classify garments, detect colours, and identify defects through AI-based inference.
In addition to the functionalities described, Figure 9 illustrates the application’s navigational structure as a flowchart. The diagram is organized into six main menus, each leading to various submenus that detail specific features. This visual layout clarifies the logical flow of the application, showing how users can intuitively navigate through the different sections.
The application communicates with the wardrobe server via HTTP requests, ensuring efficient data transfer and real-time responsiveness. Users can interact with the application using VoiceOver (iOS 17.3) or TalkBack (Android 14.0), receiving auditory descriptions of on-screen elements to navigate menus and access functions.
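As an illustration of this app-to-wardrobe exchange, the sketch below uses Python’s requests library against hypothetical endpoints; the paper specifies only that communication occurs over HTTP, so the host, paths, and payload fields are assumptions.

```python
# Sketch of the app-to-wardrobe communication over HTTP (endpoint paths, host, and
# payload fields are hypothetical).
import requests

WARDROBE_URL = "http://wardrobe.local:5000"  # assumed address of the Raspberry Pi server

def request_analysis(nfc_uid: str) -> dict:
    """Ask the wardrobe to capture images and analyse the garment currently inside."""
    response = requests.post(f"{WARDROBE_URL}/analyse", json={"nfc_uid": nfc_uid}, timeout=120)
    response.raise_for_status()
    return response.json()   # e.g., {"category": "shirt", "colour": "blue", "defects": []}

def fetch_wardrobe() -> list[dict]:
    """Retrieve the virtual wardrobe shown in the mobile application."""
    response = requests.get(f"{WARDROBE_URL}/garments", timeout=10)
    response.raise_for_status()
    return response.json()
```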

3. iSight Evaluation

This section presents the comprehensive evaluation of the iSight prototype, designed to assist visually impaired individuals in managing their clothing. The evaluation involved a survey conducted among members of the Portuguese association ACAPO, with a focus on usability, accessibility, and overall impact. A structured testing protocol was implemented under simulated real-world conditions, ensuring the methodology was applicable to everyday scenarios, followed by a detailed questionnaire to gather user feedback. ACAPO’s delegation of visually impaired technology specialists played a vital role in validating the app’s accessibility and functionality throughout the development process. Their involvement ensured that the prototype met essential accessibility criteria, such as compatibility with VoiceOver (iOS), as well as tactile feedback, navigation, and colour contrast. Statistical analyses were performed to uncover significant trends and insights, validating the prototype’s effectiveness and identifying areas for improvement.

3.1. Ethical Considerations

Ethical approval for this study was obtained from the Ethics Committee for Research in Social and Human Sciences (CEICSH) of the University of Minho (approval code CEICSH 185/2023). All participants provided informed consent, ensuring their understanding of the study’s objectives, procedures, and their right to withdraw at any time. Privacy and confidentiality were strictly maintained throughout the study.

3.2. Testing Protocol

The testing protocol followed a structured approach, divided into three key stages. In the first stage, participants were introduced to the iSight prototype, including its features, objectives, and the purpose of the study. In this stage, emphasis was placed on ensuring participants understood how the system worked, its intended functionality, and how to interact with the prototype.
The second stage involved hands-on interaction with the system, where participants performed tasks, such as categorizing clothing items, detecting modifications, and adding new garments to the system, using the NFC functionality. During this phase, participants were guided through scenarios simulating real-world situations where they would need to use the system to identify clothing types, colours, and detect potential defects. This also included evaluating the system’s ability to detect stains and holes in garments.
The final stage focused on collecting feedback. Participants were asked to complete a structured interview and a detailed questionnaire that captured both quantitative ratings and qualitative insights. This comprehensive feedback gathered information on their experience with the system, covering aspects such as usability, accuracy, and functionality.
Tasks were designed to simulate real-world scenarios. For example, participants tested the prototype’s ability to identify clothing colours and categories by placing garments in the wardrobe and using the mobile application, as depicted in Figure 10.
They also evaluated the system’s capacity to detect stains or damage on garments. The NFC verification feature was tested by scanning pre-tagged clothing items to confirm their presence in the database. This structured approach ensured that the evaluation covered a broad range of functionalities.

3.3. Sample Characterization Based on Questionnaire Data

This section provides a comprehensive analysis of the questionnaire results, focusing on the characterization of the sample and its relevance to key topics, such as the iSight prototype’s contribution to users’ daily lives, particularly in terms of confidence, self-esteem, well-being, and independence.
I. Sample Characterization
The study involved 15 voluntary participants, mainly older adults, in line with global statistics showing that visual impairment is more prevalent in this age group [36]. The gender distribution was 60% female and 40% male. In terms of age, 66.7% of the participants were aged 55 or older, 20% were in the 35–44-year range, and 13.3% were between 45 and 54 years old. These data indicate that the feedback and evaluation of the iSight prototype come predominantly from older users, with a smaller representation from the middle-aged group.
This age distribution reinforces the relevance of the iSight prototype for older populations.
Educational qualifications varied widely. For instance, 33.3% of participants completed only 4 years of schooling (first cycle of basic education), 20% completed 6 years, 13.3% completed 9 years, and 20% reached secondary education (12 years). The remaining participants held 11 years of schooling or a bachelor’s degree. This diversity in educational background provides a rich context for understanding how different levels of literacy and cognitive skills influence the usability and acceptance of assistive technologies. Moreover, 60% of participants were retirees—Figure 11—a finding that is consistent with global trends showing a reduced employment rate among people with visual impairments [37].
The sample’s demographic characteristics echo findings from previous research indicating that early-onset visual impairment is often associated with higher educational and subsequent employment outcomes [38,39,40].
Although the number of participants was relatively small, this sample size is in line with established practices in assistive technology research involving blind users, where accessibility and recruitment pose specific challenges [41,42].
To contextualize the study sample and provide a global perspective on visual impairment, the findings can be related to global statistics. According to The Lancet Global Health [37], in 2020 approximately 43.28 million people were blind, with a prevalence that increases significantly with age. The majority of those affected are older adults, with 77.7% of blind individuals being aged 50 or older. This aligns with the study’s finding that 66.7% of participants are aged 55 or more, highlighting a similar age distribution in the sample.
Globally, in 2020, it was estimated that 18.1 million people of working age (15–64 years) were blind, and 142.6 million had moderate to severe vision impairment (MSVI). The overall reduction in employment for people with blindness or MSVI is estimated to be 30.2%. This significant employment gap underscores the challenges faced by visually impaired individuals in securing and maintaining employment.
The study sample reflects this global scenario, with a high percentage of participants not currently engaged in employment. The relative reduction in employment for people with vision impairment contributes to these losses [37].
Moreover, the global prevalence of blindness shows a higher prevalence among women, who account for 55% of those affected. In this study, women also make up a majority of the participants (60%), reinforcing the gender imbalance observed globally. This demographic similarity underscores the relevance of the study’s findings to the broader context of visual impairment.
By addressing the needs of visually impaired individuals through assistive technologies such as the iSight prototype, the aim is to mitigate these employment challenges. Enhancing their ability to independently manage daily tasks, such as clothing selection and maintenance, can improve their quality of life and potentially increase their participation in the workforce. This approach could contribute to reducing the global productivity losses associated with vision impairment.
In addition, an analysis of the educational and professional backgrounds of the study participants reveals an interesting trend. In this study, three out of the four participants who attained a 12th-grade education or higher hold better employment positions and have been visually impaired since birth. The only exception is an individual who became visually impaired between the ages of 18 and 24 and also attained a 12th-grade education. This observation aligns with existing literature, which indicates that individuals with childhood-onset visual impairment tend to achieve higher education levels and subsequently better employment prospects [38]. Additionally, it has been noted that students who are blind or visually impaired are increasingly pursuing postsecondary education, with 16% of freshmen with disabilities identifying as partially sighted or blind [39]. Higher educational attainment is linked to better employment outcomes for individuals with visual impairments [40].
This consistency between the study’s findings and the literature underscores the representativeness of the sample and the relevance of the study’s findings in a broader context.
II. Type of Visual Impairment
Participants reported diverse ages at the onset of visual impairment, as depicted in Figure 12.
Notably, 33.3% have been visually impaired since birth or early infancy, while others reported later onset (with 20% between 35 and 44 years). Sixty percent indicated that their visual impairment was acquired, whereas 40% attributed their condition to congenital causes. Retinitis pigmentosa was the most prevalent condition (46.7%), followed by other conditions such as diabetic retinopathy and glaucoma (each reported by 6.7–13.3% of participants). These results are consistent with global epidemiological studies [37] and underscore the importance of tailoring the iSight prototype to accommodate diverse user experiences.
III. Technology Use and Familiarity
In the technology section, the majority (67%) reported not using any dedicated clothing management applications, which suggests a market gap that the iSight prototype could fill. Among the few users of such applications, the Colorino [43] device was most common. When asked about their comfort with technology (e.g., smartphones and computers), 40% felt very comfortable, 53.3% somewhat comfortable, and only 6.7% somewhat uncomfortable. This overall positive comfort level, together with the fact that 93.3% reported daily technology use, confirms that the target audience is both familiar and receptive to new assistive devices. These findings are in line with studies that emphasize the importance of user familiarity for successful adoption of assistive technologies [44,45].
IV. Accessibility of the iSight Mobile Application
Feedback on the mobile application was very positive. Specifically, 66.7% of participants rated the ease of navigation as “easy” and 33.3% as “very easy”. Importantly, when evaluating overall user experience, 66.7% were very satisfied, and the remaining 33.3% were satisfied, indicating that all users found the application acceptable. In addition, features such as image descriptions, textual alternatives, and the clarity of textual content were rated highly (with more than 50% providing excellent ratings). These outcomes confirm that the interface meets the accessibility requirements essential for visually impaired users.
V. Usability of the iSight Prototype
The iSight prototype, which integrates the mobile application and smart wardrobe, was considered highly usable by participants. Specifically, 53.3% rated it as “very easy to use”, while the remaining 46.7% rated it as “easy”. In terms of system accuracy, 80% of users reported the colour identification as “very precise”, and 20% as “precise”. For clothing category recognition, 60% rated it “very precise” and 40% “precise”. Furthermore, 86.7% of participants found both stain detection and NFC tag identification to be “highly effective”. These positive evaluations across all users confirm the prototype’s reliability and effectiveness in real-world use cases.
VI. Perceived Importance and Impact on Quality of Life
Participants uniformly recognized the importance of the iSight prototype in facilitating clothing selection and organization. For both functions, 66.7% strongly agreed and 33.3% agreed that the system was valuable. In assessing the impact on confidence, self-esteem, well-being, and independence, the same distribution of responses was obtained, with all participants reporting a positive impact—66.7% strongly agree and 33.3% agree. Moreover, 80% reported that the prototype greatly improved their quality of life, while the remaining 20% indicated a significant improvement. This universal satisfaction underscores the prototype’s potential to enhance daily living for blind people.

3.4. Statistical Analysis and Relevant Findings

Due to the small sample size and the ordinal nature of the data, nonparametric statistical methods were employed, ensuring more reliable results without assumptions of normality. This approach aligns with established practices in assistive technology research [41,42,46]. Specific methods employed included Fisher’s Exact Test to analyse relationships between categorical variables, particularly when sample sizes are small; Spearman’s Rank Correlation (ρ) for assessing monotonic relationships between variables, providing insights into the association between key factors; and the Mann–Whitney U Test as a non-parametric alternative to the independent samples t-test to compare differences between groups. Three significance levels were considered in the analysis, with p = 0.05 established as the primary threshold for statistical significance. Results were interpreted as follows: p ≤ 0.01 indicating strong statistical significance, p ≤ 0.05 as statistically significant, and p > 0.05 as not statistically significant. Qualitative data analysis was also utilized to extract recurring themes and insights from open-ended responses. Table 2 summarizes the employed statistical tests for each design experiment (D).
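The tests named above can be reproduced with SciPy as sketched below; the arrays are toy placeholders rather than the study’s questionnaire data, and SciPy’s fisher_exact covers the 2×2 case shown here (larger contingency tables require other tooling).

```python
# Sketch of the nonparametric tests used in the analysis (toy placeholder data).
import numpy as np
from scipy.stats import fisher_exact, spearmanr, mannwhitneyu

# Fisher's Exact Test on a 2x2 contingency table (e.g., ease of navigation vs. satisfaction).
table = np.array([[6, 4],
                  [1, 4]])
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

# Spearman's rank correlation between two ordinal ratings.
comfort = [4, 3, 4, 2, 3, 4, 3, 4, 3, 4, 2, 3, 4, 3, 4]
experience = [5, 4, 5, 3, 4, 5, 4, 5, 4, 5, 3, 4, 5, 4, 5]
rho, p_spearman = spearmanr(comfort, experience)

# Mann-Whitney U test comparing ratings from two independent groups.
group_high_comfort = [5, 5, 4, 5, 4, 5]
group_low_comfort = [3, 4, 3, 4, 3]
u_stat, p_mwu = mannwhitneyu(group_high_comfort, group_low_comfort, alternative="two-sided")

print(p_fisher, rho, p_spearman, u_stat, p_mwu)
```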
For D1.1 (Ease of Navigation vs. Satisfaction), Fisher’s Exact Test yielded a chi-square value of 3.750 with a two-sided p-value of 0.101, indicating no statistically significant association at the 0.05 level. Spearman’s Rank Correlation revealed a moderate positive correlation (ρ = 0.500, p = 0.058), suggesting a positive trend that did not reach statistical significance, likely due to sample size constraints. Similarly, for D1.2 (Ease of Use vs. Satisfaction), Fisher’s Exact Test returned a p-value of 0.608, and Spearman’s analysis showed a weak positive correlation (ρ = 0.189, p = 0.500), implying that overall ease of use does not significantly influence satisfaction in this sample. In contrast, the analysis addressing D2 demonstrated that comfort with technology significantly impacts both ease of navigation and overall user experience. The Mann–Whitney U Test produced significant results (U = 11.00, p = 0.044 for ease of navigation; U = 12.00, p = 0.047 for overall experience). These findings were supported by statistically significant Spearman correlations: a moderate positive association between comfort with technology and ease of navigation (ρ = 0.572, p = 0.026), and a strong association with overall user experience (ρ = 0.627, p = 0.012), reinforcing the strength and reliability of the observed relationships. For the functionality evaluations under D3, Spearman’s Rank Correlation showed a statistically significant moderate positive association between the ability to identify colours and the identification of categories (ρ = 0.612, p = 0.015). However, the associations between identifying colours and detecting stains (ρ = 0.294, p = 0.287) and between identifying NFC tags and detecting stains (ρ = 0.423, p = 0.116) were weak and not statistically significant.
Regarding D4, the correlation between comfort with technology and frequency of technology use was moderate (ρ = 0.488) but did not reach statistical significance (p = 0.065), indicating a positive trend that warrants further investigation with a larger sample.
Finally, for D5, which examined the relationship between iSight functionality and improvements in confidence, self-esteem, well-being, and independence, a strong positive and statistically significant correlation was observed (ρ = 0.700, p = 0.004). This result was further validated by Fisher’s Exact Test (p = 0.017), underscoring the significant impact of the prototype’s functionalities on users’ psychological and emotional well-being. In summary, while the association between ease of navigation and overall satisfaction (D1) did not reach statistical significance, the results clearly indicate that higher comfort with technology (D2) and enhanced functionality of the iSight prototype (D5) are strongly associated with improved user experience and positive psychosocial outcomes. These findings provide valuable insights for refining assistive technologies, emphasizing the importance of an intuitive interface and accommodating varying levels of technological familiarity.
To complement the quantitative findings, a qualitative analysis was conducted on the non-numeric data obtained from open-ended questionnaire responses. This analysis aimed to identify recurring patterns, themes, and insights that offer a deeper understanding of user experiences and perceptions of the iSight prototype. Special care was taken to ensure that the translation of responses from Portuguese to English preserved the original meaning while maintaining clarity and readability.
The feedback was systematically transcribed and merged to eliminate duplicate suggestions. The responses were then coded and grouped into several key themes: Performance Improvements, Feature Enhancements, and Localization Improvements, with an additional category for General Positive Feedback. Table 3 summarizes these themes along with representative example comments and the corresponding components affected in the prototype.
In summary, the qualitative analysis reveals a strong overall positive reception of the iSight prototype, while also identifying specific areas for improvement. Users consistently highlighted the need to streamline the mobile interface by reducing menu complexity, which could lead to faster decision-making and improved navigation. Additionally, the demand for more detailed information regarding fabric properties and care labels suggests that expanding the prototype’s feature set—particularly within its AI algorithms—could further enhance its utility. The integration of these insights with the quantitative results offers a robust framework for future development, ensuring that iterative refinements are closely aligned with the diverse needs of blind people.

4. Discussion

The results demonstrate that while the direct association between ease of navigation/overall ease of use and overall satisfaction did not reach statistical significance, the trend suggests that intuitive navigation may enhance satisfaction. This finding aligns with prior research that emphasizes the importance of user-friendly interfaces in assistive technologies [47,48].
Significantly, user comfort with technology strongly influenced both ease of navigation and overall user experience, supporting previous studies which found that familiarity with technology is critical for positive usability outcomes [44,45]. The strong correlations observed in this study underscore the need for developers to account for varying levels of technological proficiency in design.
In addition, the iSight prototype’s effectiveness in identifying clothing characteristics (particularly the significant association between colour and category identification) and its robust impact on psychological factors, such as well-being, confidence, and independence, reflect the prototype’s potential to enhance both practical and emotional aspects of daily life. These results are consistent with state-of-the-art literature that documents the positive influence of assistive technologies on self-efficacy and social inclusion [49,50].
Qualitative feedback further corroborates these quantitative findings. Users’ suggestions to streamline the user interface and incorporate more detailed functionalities are in line with current trends in assistive technology design, which advocate for a balance between simplicity and comprehensive support [51,52].

5. Conclusions

This research addresses a significant challenge faced by visually impaired individuals: managing and organizing clothing independently. For many, this task often requires assistance from family members or caregivers, which can undermine their sense of autonomy and self-esteem. The work presented bridges critical gaps in the existing literature by proposing an innovative system that leverages advanced computer vision and artificial intelligence technologies to empower blind individuals in this domain.
The iSight prototype represents a novel, integrated solution that includes a mechatronic smart wardrobe system and an accessible mobile application. This system is designed to identify the category, colour, and condition of garments while providing additional functionalities, such as detecting defects (e.g., stains and holes) and facilitating wardrobe organization. Importantly, this work stands apart from existing solutions, which are often either insufficiently accurate or inaccessible for independent use by visually impaired individuals. A key achievement of this research was the successful validation of the prototype through testing with visually impaired participants in collaboration with ACAPO. Feedback from participants highlighted the system’s ease of use, effectiveness, and significant impact on daily life. Users reported greater confidence, self-esteem, and independence in managing their clothing, reinforcing the practical value of the iSight prototype. Statistical analyses further corroborated these findings, revealing strong correlations between the use of the system and improvements in psychological and emotional well-being.
From a technical perspective, the integration of advanced AI algorithms allowed for high accuracy in identifying garment categories and colours. The implementation of a deep learning-based image processing system achieved impressive precision and recall rates, while data augmentation techniques further enhanced model performance. Additionally, the prototype demonstrated strong usability, with participants rating navigation and accessibility features highly. These outcomes suggest that the iSight system not only meets functional requirements but also exceeds user expectations in terms of ease of use and satisfaction. The collaboration with ACAPO was instrumental in shaping the development of the prototype. By engaging directly with the intended user base, the research ensured that the final prototype was aligned with the specific needs and preferences of blind people. This participatory approach highlights the importance of user-centred design in developing assistive technologies. Moreover, the study contributes to broader societal goals, including the promotion of autonomy and social inclusion for disabled individuals, thereby aligning with SDG 10.
In summary, the iSight prototype offers a transformative solution to the challenges of clothing management for blind individuals. By integrating a smart wardrobe with a user-friendly mobile application, the system empowers users to independently manage and verify the condition of their clothing—an area that has been unexplored. The prototype leverages advanced computer vision techniques and artificial intelligence to perform clothing classification, segmentation, and defect detection, all tailored to meet the specific needs of blind people. In doing so, it not only enhances the autonomy, confidence, and overall well-being of its users but also pioneers the combined use of AI for identifying clothing categories, colours, and defects within assistive technologies. By combining advanced technologies with a user-friendly design, this research demonstrates how innovation can address real-world problems and significantly enhance quality of life. These results underscore the prototype’s potential for broader application and significant real-world impact.

6. Future Work

Building on the achievements of this research, several directions for future development and improvement have been identified. These focus on enhancing usability, technical performance, and accessibility to further improve the iSight system’s effectiveness in supporting clothing management for individuals who are blind or visually impaired.
A primary area for improvement lies in simplifying the user interface to make it even more intuitive and accessible. This includes reducing menu complexity, optimizing navigation speed, and integrating voice commands for hands-free operation, so that users with varying levels of technical proficiency can operate the system comfortably. From a technical perspective, there is potential to refine the AI algorithms to achieve even greater accuracy and robustness. Expanding the training dataset to include a wider variety of clothing categories, colours, and conditions will help the models generalize more effectively. Exploring the integration of image captioning, as part of the system’s evolution, could further enhance the user experience and expand the system’s accessibility.
In conclusion, while the iSight prototype represents a significant advancement in assistive technology for blind people, its potential for further development and application is immense. By pursuing these future directions, the research can continue to push the boundaries of innovation, delivering practical, impactful solutions that empower users and enhance their independence and quality of life.

Author Contributions

Conceptualization, D.R., C.P.L., F.S. and V.C.; methodology, D.R., C.P.L., F.S. and V.C.; software, D.R.; validation, D.R., C.P.L., F.S. and V.C.; formal analysis, D.R., C.P.L., F.S. and V.C.; investigation, D.R., C.P.L., F.S. and V.C.; resources, F.S. and V.C.; data curation, D.R.; writing—original draft preparation, D.R.; writing—review and editing, D.R. and V.C.; visualization, D.R. and V.C.; supervision, F.S. and V.C.; project administration, F.S. and V.C.; funding acquisition, F.S. and V.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by national funds through FCT—Fundação para a Ciência e Tecnologia—within the projects scope: UIDB/00319/Centro ALGORITMI (ALGORITMI/UM), UIDB/05549:2Ai, UIDP/05549:2Ai, UIDP/04077/2020, and UIDB/04077/2020.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of University of Minho (CEICSH 185/2023) on 28 December 2023.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to the corresponding authors.

Acknowledgments

This work had the support of the Association of the Blind and Amblyopes of Portugal (ACAPO) and the Association of Support for the Visually Impaired of Braga District (AADVDB).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AADVDB: Association of Support for the Visually Impaired of Braga District
ACAPO: Association of the Blind and Amblyopes of Portugal
AI: Artificial Intelligence
CEICSH: Ethics Committee for Research in Social and Human Sciences
HSV: Hue Saturation Value
NFC: Near Field Communication
RFID: Radio Frequency Identification
RNID: Regulamento Nacional de Interoperabilidade Digital
SDG: Sustainable Development Goal
WHO: World Health Organization

References

  1. Magnitude and Projections—The International Agency for the Prevention of Blindness. Available online: https://www.iapb.org/learn/vision-atlas/magnitude-and-projections/ (accessed on 15 December 2023).
  2. INE. Census 2011: XV General Population Census and V General Housing Census; INE Statistics Portugal: Lisbon, Portugal, 2012. [Google Scholar]
  3. Chia, E.-M.; Mitchell, P.; Ojaimi, E.; Rochtchina, E.; Wang, J.J. Assessment of vision-related quality of life in an older population subsample: The Blue Mountains Eye Study. Ophthalmic Epidemiol. 2006, 13, 371–377. [Google Scholar] [CrossRef] [PubMed]
  4. Langelaan, M.; de Boer, M.R.; van Nispen, R.M.A.; Wouters, B.; Moll, A.C.; van Rens, G.H.M.B. Impact of visual impairment on quality of life: A comparison with quality of life in the general population and with other chronic conditions. Ophthalmic Epidemiol. 2007, 14, 119–126. [Google Scholar] [CrossRef] [PubMed]
  5. Goal 10|Department of Economic and Social Affairs. Available online: https://sdgs.un.org/goals/goal10 (accessed on 17 September 2024).
  6. Johnson, K.; Lennon, S.J.; Rudd, N. Dress, body and self: Research in the social psychology of dress. Fash. Text. 2014, 1, 20. [Google Scholar] [CrossRef]
  7. Adam, H.; Galinsky, A.D. Enclothed cognition. J. Exp. Soc. Psychol. 2012, 48, 918–925. [Google Scholar] [CrossRef]
  8. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172. [Google Scholar] [CrossRef]
  9. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565. [Google Scholar] [CrossRef]
  10. Messaoudi, M.D.; Menelas, B.-A.J.; Mcheick, H. Review of Navigation Assistive Tools and Technologies for the Visually Impaired. Sensors 2022, 22, 7888. [Google Scholar] [CrossRef]
  11. Yang, X.; Yuan, S.; Tian, Y. Assistive Clothing Pattern Recognition for Visually Impaired People. IEEE Trans. Human-Machine Syst. 2014, 44, 234–243. [Google Scholar] [CrossRef]
  12. Medeiros, A.J.; Stearns, L.; Findlater, L.; Chen, C.; Froehlich, J.E. Recognizing Clothing Colors and Visual Textures Using a Finger-Mounted Camera: An Initial Investigation. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS ’17, Baltimore, MD, USA, 20 October–1 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 393–394. [Google Scholar] [CrossRef]
  13. Khalid, L.; Gong, W. Vision4All—A Deep Learning Fashion Assistance Solution For Blinds. In Proceedings of the 2022 5th International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 27–30 May 2022; pp. 156–161. Available online: https://api.semanticscholar.org/CorpusID:250504202 (accessed on 16 February 2025).
  14. Goh, K.N.; Chen, Y.Y.; Lin, E.S. Developing a smart wardrobe system. In Proceedings of the 2011 IEEE Consumer Communications and Networking Conference (CCNC) 2011, Las Vegas, NV, USA, 9–12 January 2011; pp. 303–307. [Google Scholar] [CrossRef]
  15. Alabduljabbar, R. An IoT smart clothing system for the visually impaired using NFC technology. Int. J. Sen. Netw. 2022, 38, 46–57. [Google Scholar] [CrossRef]
  16. Stangl, A.J.; Kothari, E.; Jain, S.D.; Yeh, T.; Grauman, K.; Gurari, D. BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS ’18, Galway, Ireland, 22–24 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 107–118. [Google Scholar] [CrossRef]
  17. Li, C.; Li, J.; Li, Y.; He, L.; Fu, X.; Chen, J. Fabric Defect Detection in Textile Manufacturing: A Survey of the State of the Art. Secur. Commun. Netw. 2021, 2021, 9948808. [Google Scholar] [CrossRef]
  18. Ngan, H.Y.T.; Pang, G.K.H.; Yung, N.H.C. Automated fabric defect detection—A review. Image Vis. Comput. 2011, 29, 442–458. [Google Scholar] [CrossRef]
  19. He, X.; Wu, L.; Song, F.; Jiang, D.; Zheng, G. Research on Fabric Defect Detection Based on Deep Fusion DenseNet-SSD Network. In Proceedings of the International Conference on Wireless Communication and Sensor Networks, icWCSN 2020, Warsaw, Poland, 13–15 May 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 60–64. [Google Scholar] [CrossRef]
  20. Jing, J.; Wang, Z.; Rätsch, M.; Zhang, H. Mobile-Unet: An efficient convolutional neural network for fabric defect detection. Text. Res. J. 2022, 92, 30–42. [Google Scholar] [CrossRef]
  21. Xie, H.; Wu, Z. A Robust Fabric Defect Detection Method Based on Improved RefineDet. Sensors 2020, 20, 4260. [Google Scholar] [CrossRef] [PubMed]
  22. Han, Y.-J.; Yu, H.-J. Fabric Defect Detection System Using Stacked Convolutional Denoising Auto-Encoders Trained with Synthetic Defect Data. Appl. Sci. 2020, 10, 2511. [Google Scholar] [CrossRef]
  23. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
  24. Liu, Z.; Luo, P.; Qiu, S.; Wang, X.; Tang, X. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar] [CrossRef]
  25. Ge, Y.; Zhang, R.; Wu, L.; Wang, X.; Tang, X.; Luo, P. DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5332–5340. Available online: https://api.semanticscholar.org/CorpusID:59158744 (accessed on 16 February 2025).
  26. Zheng, S.; Yang, F.; Kiapour, M.; Piramuthu, R. ModaNet: A Large-Scale Street Fashion Dataset with Polygon Annotations. arXiv 2018, arXiv:1807.01394. [Google Scholar] [CrossRef]
  27. Liang, X.; Liu, S.; Shen, X.; Yang, J.; Liu, L.; Dong, J.; Lin, L.; Yan, S. Deep Human Parsing with Active Template Regression. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 2402–2414. [Google Scholar] [CrossRef]
28. Liang, X.; Xu, C.; Shen, X.; Yang, J.; Liu, S.; Tang, J.; Lin, L.; Yan, S. Human Parsing with Contextualized Convolutional Neural Network. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1386–1394. [Google Scholar] [CrossRef]
  29. Gong, K.; Liang, X.; Zhang, D.; Shen, X.; Lin, L. Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef]
30. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
  31. Rocha, D.; Carvalho, V.; Soares, F.; Oliveira, E.; Leão, C.P. Understand the Importance of Garments’ Identification and Combination to Blind People. In Human Interaction, Emerging Technologies and Future Systems; Ahram, V.T., Taiar, R., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 74–81. [Google Scholar]
  32. Rocha, D.; Soares, F.; Oliveira, E.; Carvalho, V. Blind People: Clothing Category Classification and Stain Detection Using Transfer Learning. Appl. Sci. 2023, 13, 1925. [Google Scholar] [CrossRef]
  33. Rocha, D.; Pinto, L.; Machado, J.; Soares, F.; Carvalho, V. Using Object Detection Technology to Identify Defects in Clothing for Blind People. Sensors 2023, 23, 4381. [Google Scholar] [CrossRef]
  34. Introdução à Norma Europeia EN 301 549—acessibilidade.gov.pt. Available online: https://www.acessibilidade.gov.pt/blogue/categoria-normas/microsoft-publica-videos-sobre-a-norma-europeia-de-acessibilidade-as-tic/ (accessed on 25 July 2024).
  35. WCAG 2 Overview | Web Accessibility Initiative (WAI) | W3C. Available online: https://www.w3.org/WAI/standards-guidelines/wcag/ (accessed on 16 February 2025).
  36. DL n.° 83/2018—Acessibilidade Dos Sítios Web e Das Aplicações Móveis—acessibilidade.gov.pt. Available online: https://www.acessibilidade.gov.pt/blogue/categoria-acessibilidade/dl-n-o-83-2018-acessibilidade-dos-sitios-web-e-das-aplicacoes-moveis/ (accessed on 25 July 2024).
37. Burton, M.J.; Ramke, J.; Marques, A.P.; Bourne, R.R.A.; Congdon, N.; Jones, I.; Ah Tong, B.A.M.; Arunga, S.; Bachani, D.; Bascaran, C.; et al. The Lancet Global Health Commission on Global Eye Health: Vision beyond 2020. Lancet Glob. Health 2021, 9, e489–e551. [Google Scholar] [CrossRef]
  38. Pavey, S.; Douglas, G.; Corcoran, C. Transition into adulthood and work — findings from Network 1000. Br. J. Vis. Impair. 2008, 26, 202–216. [Google Scholar] [CrossRef]
  39. Schneider, K. Students Who Are Blind or Visually Impaired in Postsecondary Education. 2001. Available online: https://api.semanticscholar.org/CorpusID:140769224 (accessed on 16 February 2025).
  40. McDonnall, M.C.; Tatch, A. Educational Attainment and Employment for Individuals with Visual Impairments. J. Vis. Impair. Blind. 2021, 115, 152–159. [Google Scholar] [CrossRef]
41. Caine, K.E. Local Standards for Sample Size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; Available online: https://api.semanticscholar.org/CorpusID:14445824 (accessed on 16 February 2025).
  42. Barnett, S.D.; Heinemann, A.W.; Libin, A.; Houts, A.C.; Gassaway, J.; Sen-Gupta, S.; Resch, A.; Brossart, D.F. Small N designs for rehabilitation research. J. Rehabil. Res. Dev. 2012, 49, 175–186. [Google Scholar] [CrossRef] [PubMed]
  43. Colorino—Color and Light detector—Caretec. Available online: https://www.caretec.at/product/colorino-color-and-light-detector/ (accessed on 15 July 2024).
44. Bhatt, S.; Agrali, A.; Suri, R.; Ayaz, H. Does Comfort with Technology Affect Use of Wealth Management Platforms? Usability Testing with fNIRS and Eye-Tracking. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018; Springer: Cham, Switzerland, 2018. Available online: https://api.semanticscholar.org/CorpusID:169196337 (accessed on 16 February 2025).
  45. McLellan, S.G.; Muddimer, A.; Peres, S.C. The effect of experience on system usability scale ratings. J. Usability Stud. Arch. 2012, 7, 56–67. [Google Scholar]
  46. Chung, Y. A Review of Nonparametric Statistics for Applied Research. J. Educ. Behav. Stat. 2016, 41, 455–458. [Google Scholar] [CrossRef]
  47. Martin, J.K.; Martin, L.G.; Stumbo, N.J.; Morrill, J. The impact of consumer involvement on satisfaction with and use of assistive technology. Disabil. Rehabil. Assist. Technol. 2011, 6, 225–242. [Google Scholar] [CrossRef]
  48. Ranada, Å.L.; Lidström, H. Satisfaction with assistive technology device in relation to the service delivery process—A systematic review. Assist. Technol. 2019, 31, 82–97. [Google Scholar] [CrossRef]
  49. Ghafoor, K.; Ahmad, T.; Aslam, M.; Wahla, S.Q. Improving social interaction of the visually impaired individuals through conversational assistive technology. Int. J. Intell. Comput. Cybern. 2023, 17, 126–142. [Google Scholar] [CrossRef]
  50. Shinohara, K.; Wobbrock, J.O. Self-Conscious or Self-Confident? A Diary Study Conceptualizing the Social Accessibility of Assistive Technology. ACM Trans. Access. Comput. 2016, 8, 1–31. [Google Scholar] [CrossRef]
51. Christy, B.; Pillai, A. User feedback on usefulness and accessibility features of mobile applications by people with visual impairment. Indian J. Ophthalmol. 2021, 69, 555–558. [Google Scholar] [CrossRef]
  52. Darvishy, A. Accessibility of Mobile Platforms; Springer: Cham, Switzerland, 2014; Available online: https://api.semanticscholar.org/CorpusID:11212078 (accessed on 16 February 2025).
Figure 1. Bar chart illustrating the number of publications between 2007 and 2023 in various research areas concerning blind people.
Figure 2. Workflow of the iSight system, showing the interaction between the user, mobile application, and wardrobe prototype.
Figure 3. Workflow for clothing segmentation and colour extraction.
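To make the colour-extraction step summarized in Figure 3 concrete, the following minimal sketch shows one way it could be implemented: averaging the RGB values of the pixels inside the segmentation mask and mapping the result to the nearest named colour. The mask format, colour palette, and function name are illustrative assumptions and do not reproduce the exact iSight implementation.

```python
import numpy as np

# Illustrative palette; the real system may use a finer-grained set of colour names.
PALETTE = {
    "black": (0, 0, 0), "white": (255, 255, 255), "red": (220, 20, 60),
    "green": (34, 139, 34), "blue": (30, 100, 200), "yellow": (240, 220, 50),
    "grey": (128, 128, 128), "brown": (120, 80, 40),
}

def dominant_colour(image_rgb: np.ndarray, mask: np.ndarray) -> str:
    """Return the named colour closest to the mean RGB of the masked garment pixels.

    image_rgb: H x W x 3 uint8 array; mask: H x W boolean array from the segmentation model.
    """
    pixels = image_rgb[mask]                       # keep only garment pixels
    if pixels.size == 0:
        return "unknown"                           # no garment detected in the frame
    mean_rgb = pixels.mean(axis=0)                 # average colour of the garment
    names = list(PALETTE)
    refs = np.array([PALETTE[n] for n in names], dtype=float)
    distances = np.linalg.norm(refs - mean_rgb, axis=1)  # Euclidean distance in RGB space
    return names[int(np.argmin(distances))]
```

In practice, a perceptually uniform colour space such as CIELAB would likely give more reliable colour names than raw RGB distances.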
Figure 4. iSight smart wardrobe prototype.
Figure 5. General view of the wardrobe interior: (A) LED strips applied on the left panel; (B) camera placed in the middle of LED strips.
Figure 6. Smart wardrobe: (A) Interior of the wardrobe showing the distance from the camera to the hanger and internal components; (B) motor supporting the hanger.
Figure 7. Clothing identification system based on an NFC tag, showing representative garment characteristics.
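As an illustration of the NFC-based identification in Figure 7, the sketch below shows one plausible way a garment record could be serialized to, and parsed from, the text payload of a tag. The field names and the JSON encoding are assumptions for illustration; the exact payload format used by iSight is not specified here.

```python
import json

# Hypothetical garment record; field names are illustrative, not the actual iSight schema.
def encode_garment_record(garment_id: str, category: str, colour: str, notes: str = "") -> str:
    """Serialize a garment description into a compact JSON string for an NFC text record."""
    record = {"id": garment_id, "category": category, "colour": colour, "notes": notes}
    return json.dumps(record, ensure_ascii=False)

def decode_garment_record(payload: str) -> dict:
    """Parse the JSON text payload read from an NFC tag back into a dictionary."""
    return json.loads(payload)

# Example round trip
payload = encode_garment_record("0042", "shirt", "blue", "washed recently")
print(decode_garment_record(payload)["category"])  # -> "shirt"
```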
Figure 8. iSight app main menu.
Figure 9. Block diagram of the iSight mobile application interface, highlighting key features such as NFC reading, garment categorization, and AI analysis.
Figure 10. ACAPO members interacting with the iSight prototype during the testing phase, showcasing hands-on engagement: (A) Participant using the mechatronic device (smart wardrobe); (B) participant using the iSight mobile application.
Figure 11. Distribution of participants' professional backgrounds.
Figure 12. Distribution of participants' age at the onset of visual impairment.
Table 1. Main performance results of the model for clothing mask segmentation (Precision, Recall, AP at IoU = 0.50, and AP).

Category | Precision | Recall | AP at IoU = 0.50 | AP
all | 0.941 | 0.896 | 0.965 | 0.949
Dress | 0.849 | 0.917 | 0.941 | 0.906
Jacket | 1 | 0.911 | 0.99 | 0.955
Pants | 0.989 | 0.875 | 0.97 | 0.96
Polo | 0.953 | 0.837 | 0.96 | 0.952
Shirt | 0.863 | 0.875 | 0.925 | 0.916
Shoes | 0.996 | 1 | 0.995 | 0.995
Shorts | 0.833 | 0.833 | 0.964 | 0.955
T-shirt | 0.917 | 0.917 | 0.971 | 0.954
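As a companion to Table 1, the snippet below sketches how the mask IoU underlying the "AP at IoU = 0.50" column can be computed and how a predicted mask is counted as a true positive at that threshold. It is a simplified illustration with NumPy boolean masks; the reported values come from the full detection-evaluation pipeline, not from this code.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two boolean masks of the same shape."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

def is_true_positive(pred: np.ndarray, gt: np.ndarray, iou_threshold: float = 0.50) -> bool:
    """A predicted mask counts as a true positive when its IoU with the ground truth reaches the threshold."""
    return mask_iou(pred, gt) >= iou_threshold

# Precision and recall then follow from the matched detections:
# precision = TP / (TP + FP), recall = TP / (TP + FN),
# and AP averages precision over the recall range at the chosen IoU threshold.
```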
Table 2. Summary of statistical tests, hypotheses, and results. In the Significance Level column, "Not significant" denotes no correlation/significance, while "Significant" denotes a substantial impact/correlation.

Test/Analysis | Hypothesis | Result Summary | Significance Level
Fisher's Exact Test | D1.1: Ease of Navigation vs. Satisfaction | Not significant (p = 0.101) | Not significant
Fisher's Exact Test | D1.2: Ease of Use vs. Satisfaction | Not significant (p = 0.608) | Not significant
Spearman's Rank Correlation | D1.1: Ease of Navigation vs. Satisfaction | Moderate positive correlation, not significant (ρ = 0.500, p = 0.058) | Not significant
Spearman's Rank Correlation | D1.2: Ease of Use vs. Satisfaction | Weak positive correlation, not significant (ρ = 0.189, p = 0.500) | Not significant
Mann–Whitney U Test | D2.1: Ease of Navigation vs. Comfort with Technology | Significant (U = 11.00, p = 0.044) | Significant
Mann–Whitney U Test | D2.2: Overall Experience vs. Comfort with Technology | Significant (U = 12.00, p = 0.047) | Significant
Spearman's Rank Correlation | D2.1: Comfort with Technology vs. Ease of Navigation | Moderate positive correlation, significant (ρ = 0.572, p = 0.026) | Significant
Spearman's Rank Correlation | D2.2: Comfort with Technology vs. Overall Experience | Strong positive correlation, significant (ρ = 0.627, p = 0.012) | Significant
Spearman's Rank Correlation | D3: Identifying Colours and Identifying Categories | Moderate positive association, significant (ρ = 0.612, p = 0.015) | Significant
Spearman's Rank Correlation | D3: Identifying Colours and Detecting Stains | Weak positive association, not significant (ρ = 0.294, p = 0.287) | Not significant
Spearman's Rank Correlation | D3: Identifying NFC Tags and Detecting Stains | Weak positive association, not significant (ρ = 0.423, p = 0.116) | Not significant
Spearman's Rank Correlation | D4: Comfort with Technology vs. Frequency of Technology Use | Moderate positive correlation, not significant (ρ = 0.488, p = 0.065) | Not significant
Spearman's Rank Correlation | D5: iSight Functionality vs. Increased Confidence, Self-esteem, Well-being, and Independence | Strong positive correlation, significant (ρ = 0.700, p = 0.004) | Significant
Fisher's Exact Test | D5: iSight Functionality vs. Increased Confidence, Self-esteem, Well-being, and Independence | Significant (p = 0.017) | Significant
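For readers who wish to reproduce analyses of this kind, the nonparametric tests listed in Table 2 can be run with standard SciPy routines, as in the hedged sketch below. The response arrays are placeholders standing in for ordinal questionnaire items, not the actual participant data.

```python
import numpy as np
from scipy import stats

# Placeholder ordinal responses (e.g., 5-point Likert items); not the real questionnaire data.
ease_of_navigation = np.array([5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 5, 4, 4, 3, 5])
satisfaction       = np.array([5, 4, 5, 3, 5, 4, 3, 5, 4, 4, 5, 4, 4, 3, 5])

# Spearman's rank correlation for monotonic association between two ordinal variables.
rho, p_spearman = stats.spearmanr(ease_of_navigation, satisfaction)

# Mann-Whitney U test comparing ratings between two independent groups
# (e.g., participants comfortable vs. not comfortable with technology).
group_a = satisfaction[:8]
group_b = satisfaction[8:]
u_stat, p_mwu = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Fisher's exact test on a 2x2 contingency table
# (e.g., high/low functionality rating vs. reported increase in confidence).
table = np.array([[6, 2],
                  [1, 6]])
odds_ratio, p_fisher = stats.fisher_exact(table)

print(rho, p_spearman, u_stat, p_mwu, odds_ratio, p_fisher)
```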
Table 3. User feedback and the corresponding affected components described in the previous section.

Category | Feedback/Suggestion | Affected Component
Performance Improvements | "This is a very interesting idea for daily use. I would just add fewer menus to make it faster." | Mobile application interface
Performance Improvements | "The application is useful, but I would like it to be faster in making choices, that is, to have fewer menus." | Mobile application interface
Feature Enhancements | "Include information such as the fabric of the clothing." | AI algorithms (fabric identification)
Feature Enhancements | "Read label characteristics such as washing instructions, ironing temperature, and whether it can be bleached." | AI algorithms (label detection and reading)
Feature Enhancements | "Check the type of fabric of the clothing." | AI algorithms (fabric identification)
Feature Enhancements | "Check if there is a mix-up of shoes in similar models." | AI algorithms (object recognition)
Feature Enhancements | "It would be interesting to also know the location of the stain on the piece of clothing." | AI algorithms (stain localization)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
