Article

User-Centered Design of a Computer Vision System for Monitoring PPE Compliance in Manufacturing

by Luis Alberto Trujillo-Lopez, Rodrigo Alejandro Raymundo-Guevara and Juan Carlos Morales-Arevalo *

Ingeniería de Software, Facultad de Ingeniería, Universidad Peruana de Ciencias Aplicadas, Lima 15023, Peru
* Author to whom correspondence should be addressed.
Computers 2025, 14(8), 312; https://doi.org/10.3390/computers14080312
Submission received: 28 June 2025 / Revised: 27 July 2025 / Accepted: 29 July 2025 / Published: 1 August 2025

Abstract

In manufacturing environments, the proper use of Personal Protective Equipment (PPE) is essential to prevent workplace accidents. Despite this need, existing PPE monitoring methods remain largely manual and suffer from limited coverage, significant error rates, and inefficiency. This article addresses this gap by designing a computer vision desktop application for automated monitoring of PPE use. The system uses lightweight YOLOv8 models that run locally, so it can operate even in industrial sites with limited network connectivity. Following a Lean UX approach, development involved creating empathy maps, assumption and hypothesis tables, and a product backlog, followed by high-fidelity prototype interfaces. C4 and physical deployment diagrams defined a system architecture that favors modifiability, scalability, and maintainability. Usability was verified using the System Usability Scale (SUS), yielding a score of 87.6/100, which indicates “excellent” usability. The findings demonstrate that a user-centered design approach that balances user experience and technical flexibility can significantly advance the utility and adoption of AI-based safety tools, especially in small- and medium-sized manufacturing operations. This article delivers a validated, user-centered design for integrating machine vision into manufacturing safety processes, easing the practical application of advanced AI technologies in resource-limited environments.

1. Introduction

Using personal protective equipment (PPE) is one of the most important strategies for maintaining safety in the manufacturing sector, where employees are exposed to physical and mechanical hazards. However, the overwhelming majority of organizations monitor compliance with PPE standards manually [1]. Manual monitoring of this kind is limited in coverage, efficiency, and reliability. Furthermore, human factors limit the ability to monitor processes continuously and to prevent accidents at work.
In light of this challenge, computer vision is a compelling solution because it can automate PPE detection using deep learning. Models based on You Only Look Once (YOLO) have successfully detected helmets, gloves, and high-visibility vests in real time in constantly changing industrial environments [2]. These technologies not only improve accuracy, but also provide continuous monitoring unconstrained by the limits of human attention, drastically reduce the scope for error, and make organizational safety measures more efficient.
To translate this opportunity into a practical solution, the purpose of this project is to develop a desktop application that applies computer vision to verify the use of PPE while employees work. Desktop applications installed directly on local computers do not require an external server, which benefits both small and large companies [3]. This approach also favors efficiency, data security and privacy, and simplicity of implementation, bringing advanced technologies closer to the real needs of manufacturers.
The introduction of automated systems presents obstacles, such as adapting to low and variable lighting and to worker movement. A recent study identifies these issues as an active part of real-world system testing [4]. Another challenge is gaining acceptance in industrial settings where new technology is not commonly adopted. This project aims to overcome that resistance through an intuitive interface and minimal computational resource requirements.
Despite these technical difficulties, the motivation to improve workplace safety remains strong. A recent study reported positive early findings, suggesting that computer vision is not merely possible but indeed necessary in the SRM process to produce a safer and more productive workplace [5]. The aim is simple: minimize workplace incidents, save lives, and take risk-prevention management beyond the current state of practice through technology and innovation [6].
The primary objective of this article was to apply a systematic, realistic process, employing the Lean User Experience (Lean UX) methodology across all development phases, so that the design would match the actual needs of users and iterate effectively on direct user feedback. After the project’s completion, the usability and effectiveness of the system were evaluated with the System Usability Scale (SUS). This provided assurance that the solution would be not only technically effective but also accessible, intuitive, and reliable for users. Prior research has shown that these methodological practices improve the perceived quality of digital products, reporting an SUS score of 81.75 after incorporating Lean UX into a web interface redesign; both results suggest they are effective ways to enhance usability and user satisfaction [7].

2. Related Work

2.1. Technologies Used with YOLO

Recent advances in computer vision have shown promise for automated PPE detection through several avenues. In the construction industry, where dynamic environments and a high density of manual labor present significant safety risks, effective PPE monitoring has become a major challenge. Although workers are often required to wear personal protective equipment such as hard hats and reflective clothing, in practice, these items are frequently neglected due to factors like low safety awareness, discomfort, or lack of supervision—resulting in severe safety hazards on construction sites [8].
The SH17 dataset is also useful because it contains 8099 images from 17 categories of PPE, designed for manufacturing environments. In addition, the solution uses lightweight variants of YOLO (v8-nano) to ensure smooth operation on standard industrial computers without the need for specialized hardware [9]. While this dataset could directly benefit the training process, it does not address how to create a comprehensive application with intuitive interfaces and real-time monitoring capabilities.
Agricultural applications based on YOLO offer important optimization lessons for a PPE detection system. A study achieved efficient weed detection on peripheral devices by incorporating Bidirectional Feature Pyramid Network (BiFPN) and LiteDetect into YOLOv8n. While its agricultural context differs from industrial safety, its parameter reduction techniques (which achieved a 63.8% reduction in parameters) directly influence an approach to maintaining model efficiency [10]. However, additional challenges remain when integrating these optimizations into a full desktop application that must manage dynamic manufacturing environments with variable lighting and worker movements.
Our proposed solution also leverages the YOLO framework, specifically version 8 (YOLOv8), due to its balance between accuracy and processing speed, which is crucial for real-time applications. To improve the model’s robustness and generalization capacity, we employ a custom dataset enhanced through data augmentation techniques. These augmentations simulate various real-world scenarios such as changes in lighting, occlusions, and worker positions, which are commonly encountered in manufacturing environments.
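As a rough illustration of the lighting augmentations described above, the sketch below scales pixel intensities to simulate brighter or darker scenes. This is a minimal, pure-Python stand-in for what a real training pipeline would do with dedicated augmentation tooling; the function name and parameters are illustrative, not taken from the paper.

```python
import random

def augment_brightness(image, low=0.6, high=1.4, seed=None):
    """Return a copy of `image` (rows of grayscale pixels, 0-255) with
    all intensities scaled by one random factor and clipped to range.

    Drawing the factor from [low, high] mimics under- and over-lit
    scenes, so the detector sees the same scene at varied exposures.
    """
    factor = random.Random(seed).uniform(low, high)
    return [[min(255, max(0, round(p * factor))) for p in row]
            for row in image]

# Tiny 2x2 "frame" for demonstration; a fixed seed makes it repeatable.
frame = [[100, 150], [200, 250]]
augmented = augment_brightness(frame, seed=1)
print(augmented)
```

In practice, the same idea extends to occlusions and geometric shifts; each transform widens the range of conditions represented in the 44,000-image dataset without collecting new footage.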
Table 1 presents success stories of the use of deep learning in YOLO-based industrial applications that validate the viability of this technology in the manufacturing sector and highlight key lessons for our project.

2.2. Lean UX in Industrial Applications

Recent studies highlight the effectiveness of user-centered approaches in demanding technical environments, demonstrating that iterative and collaborative design improves the user experience, aligning with Lean UX principles [11]. Likewise, another study showed that integrating Lean UX with agile frameworks such as Scrum allows design hypotheses to be validated in short and efficient cycles [12]. These approaches support the methodology by facilitating the development of applications tailored to the real needs of operators and supervisors, while the System Usability Scale (SUS) provides a reliable quantitative assessment of usability.
In contrast, the traditional Waterfall approach features rigid sequential phases with low adaptability, making it difficult to incorporate early feedback and often leading to costly rework in later phases. A recent comparative study found that projects based on agile methodologies have a success rate of 40%, compared to just 15% for those guided by the Waterfall approach, highlighting its limitations in changing and demanding environments such as the industrial sector [13].
For these reasons, our project adopts a Lean UX approach, which offers the flexibility and user focus needed in high-velocity industries, combined with agile ways of working that allow rapid, iterative testing of design ideas and continuous operational feedback from operators and supervisors. This reduces rework and the risk of producing an application that meets an imagined rather than an actual operational need.
Table 2 compares traditional design methodologies with the Lean UX approach applied in industrial environments, highlighting how this methodology allows for more agile development cycles, greater focus on the end user, and more natural integration with quantitative assessments such as the System Usability Scale (SUS). This combination not only accelerates the iteration of functional prototypes but also ensures products with high levels of acceptance in contexts where resistance to change and resource constraints are common barriers.

2.3. Computer Vision in Industries

Research on railway security using YOLOv3 and Mask Regions with Convolutional Neural Networks (R-CNN) for intrusion detection shares our focus on security applications, but differs in its technical approach [11]. While that work addresses challenges in open environments, such as low visibility, it is less focused on controlled industrial environments where regulatory compliance with PPE is critical. The YOLOv8 architecture offers greater accuracy and speed than the YOLOv3 implementation, which is crucial for real-time PPE monitoring.
Agricultural disease detection systems using YOLOv8/v9 on drones and smartphones demonstrate the versatility of the framework [12]. However, to develop an application for medium or small industrial companies, we deliberately avoid the use of expensive hardware such as drones, opting for conventional webcams to maximize accessibility. We also prioritize data privacy through local processing, a crucial concern in industrial environments that is not addressed by its cloud-based approach.
Case studies in defect detection for engine components—such as pistons, cylinders, and crankshafts—within the manufacturing industry have shown that enhanced versions of YOLO, like DDSC-YOLOv5s and DBFF-YOLOv4, significantly improve the accuracy of detecting casting defects, machining irregularities, and assembly issues. These frameworks integrate advanced modules and hybrid approaches that combine deep learning with traditional computer vision to achieve precise inspection results in complex production environments. Such methods demonstrate the adaptability of deep learning models, particularly YOLO variants, in high-stakes manufacturing applications, and support their applicability in PPE monitoring tasks within dynamic industrial settings [13].
YOLOv8n’s specialized adaptation of MEAG-YOLO (Multi-scale Efficient Attention and Ghost convolution-YOLO) for electrical substations achieved a remarkable 98.4% accuracy in PPE detection [6]. The use of modules such as Multi-Scale Channel Attention (MSCA) and GhostConv could inspire future optimizations in our model. However, our main innovation lies not in the architecture of the model, but in presenting the technology as an accessible desktop application designed for manufacturing industries, bridging the gap between advanced machine vision and practical industrial implementation.
Given the precedents discussed, our proposed solution is specifically designed to address three critical challenges identified in previous studies: real-time monitoring, variable environmental conditions, and regulatory compliance. By leveraging the strengths of the YOLOv8 architecture, combined with practical deployment strategies such as the use of conventional webcams and local processing, our approach aims to ensure accurate, responsive, and compliant PPE detection in dynamic industrial settings.
Table 3 contrasts the typical challenges of manual PPE monitoring with the solutions offered by YOLOv8, emphasizing how a new approach can improve efficiency, reduce costs, and ensure real-time monitoring.

3. Methodology

In the development of technological solutions with machine vision, the adoption of agile methodologies is key to ensuring the effectiveness and adaptability of the final product. A recent study implemented Lean UX in the design of virtual reality environments to support the learning of students with Attention Deficit Hyperactivity Disorder (ADHD), which enabled fast, intuitive, and collaborative responses adapted to real needs [14]. Given this successful experience in complex technological contexts, we propose applying Lean UX to design an application that detects the improper use of PPE by workers in the manufacturing sector. Lean UX is composed of three stages: Think, Make, and Check.

3.1. Think

In Lean UX, there is first the “Think” phase, which focuses on thoroughly understanding the problem to be solved, formulating clear hypotheses to guide the design and development process. This approach recognizes that proper problem identification is critical to designing technology solutions that effectively respond to real needs. It is important to understand the difficulties experienced by workers in industrial workplaces, and the variables that impact their experiences of protective equipment, in any development, especially in contexts where safety and efficiency are paramount [15].
In this light, Lean UX focuses on the development of hypothesis and assumption tables. These tables clarify the user’s context, needs, and behaviors, and allow us to validate or invalidate hypotheses through experimentation. The empathy map, which organizes data about what users do, think, feel, and perceive, is an especially helpful model for understanding the user within a Lean UX framework. This method proved effective in recent research with indigenous tribes in participatory design contexts, where it was used to identify trends in experience and guide user-centered design [16].
Finally, this phase emphasizes the definition of an abstract design framework for the solution. It also sorts and prioritizes the product backlog, which includes the features and functions of the product under development, using the Fibonacci sequence to rank and estimate tasks. According to a recent study, projects using Lean UX have shown improved integration of engineering and risk-management viewpoints, encouraging the development of solutions adaptable to an industry with constantly changing demands and regulations [15].

3.2. Make

In the “Make” phase of Lean UX, physical and logical architectures are designed. A physical architecture is concerned with distribution and implementation of components within the technological infrastructure; logical architecture is concerned with definition of organization and interactions. By combining the two, it ensures a well-organized system that will be easier to maintain, scale, and trust, which is particularly important in industrial contexts [17].
Interface design in this space must be intuitive and suited for the needs. During this phase, adopting something like C4 architecture (Context, Containers, Components, and Code) is very relevant and valuable. This model provides clear opportunities to visually understand how the system is designed, and how the various components are mapped out, which supports the collaboration between the design and development teams [18]. This gives great value in a dynamic environment when clarity is needed to navigate.
The stated ideas and concepts are then designed and prototyped with tangible representations to test the hypotheses and the activities defined in the product backlog. According to the cited article, prototypes provide visibility into how the user interacts with technology, allow faster user feedback, improve alignment between development and design teams, and mediate between requirements and technical decisions during systems development [19]. These prototypes allow possible improvements to the user interface and user experience to be tested before implementing the final product.
In addition, user flows are a valuable tool and play an essential role in mapping user interactions. User flows are diagrams that visualize the steps users must take to complete specific tasks. Visual aids like user flows not only bring clarity to processes, but also align the system’s design with users’ actual needs, leading to greater usability and technology acceptance [20].

3.3. Check

Lastly, the “Check” phase of Lean UX involves assessing and validating solutions to be certain they function as intended and are useful. At this stage, users provide direct feedback to validate the stated hypotheses. Surveys are an effective way to obtain information about users’ experience with the prototype, while also surfacing potential usability problems and areas for improvement. A survey with well-crafted questions evaluates the perceived functionality and usability of the design, which helps adjust it before implementation [21].
The user interface design principles proposed by Jakob Nielsen are a core component of this usability-measurement framework. Nielsen’s heuristics, such as visibility of system status, match between the system and the real world, and error prevention, are critical when designing any interface the user interacts with, and are indicative of an intuitive and safe interface. Applying these heuristics improves the user experience, prevents errors, and increases user acceptance of the system [22].
Finally, a very popular tool for usability evaluation is the System Usability Scale (SUS). The SUS, developed by John Brooke, is a ten-question Likert questionnaire that provides a quantitative measure of users’ perception of usability [23]. In this work, it is used to evaluate the application designed for the manufacturing sector.
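Brooke’s scoring rule can be written compactly: odd-numbered (positively worded) items contribute their score minus one, even-numbered (negatively worded) items contribute five minus their score, and the sum is scaled by 2.5 to give a 0–100 result. A minimal sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten Likert
    responses (1-5), using Brooke's standard scoring rule.

    Odd items (index 0, 2, ... here) contribute (score - 1);
    even items contribute (5 - score); the total is scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A neutral response to every item yields the midpoint score.
print(sus_score([3] * 10))  # 50.0
```

Averaging this score over all respondents gives the aggregate figure reported later in the paper.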
As shown in Table 4, the SUS is commonly used and is best known for its simplicity and for providing an overview of user experience from their perspective of system performance.

4. Results

We applied the Lean UX approach described in the methodology, detailing its three stages: Think, Make, and Check. In the Think stage, tools such as empathy maps, UX hypothesis building, and the product backlog were used to thoroughly understand the users. In the Make stage, solutions were materialized through a coherent technical architecture and visual prototypes. Finally, in the Check stage, the user experience was validated using the SUS questionnaire. Each stage provided important information to develop a solution that met the real needs of the industrial context, offering an effective and satisfactory experience for users.

4.1. Think

To ensure a user-centered design, an empathy map was developed specifically focused on industrial supervisors, who will be the main users of the desktop application for the detection of personal protective equipment (PPE).
This tool, shown in Figure 1, allowed us to gain an in-depth understanding of their motivations, frustrations, needs, and the actual context in which they interact with technology. By identifying what they see, hear, think, feel, and do in their work environment, their perceptions revealed that they face difficulties due to the lack of effective tools to monitor and the complexity of current systems, which generates frustration and pressure to comply with safety regulations. This mapping was key to aligning the technology solution with users’ actual expectations and conditions.
Table 5 presents a series of key assumptions about user behavior and needs in relation to the platform, as well as the associated user experience (UX) hypotheses that will guide the design and development of the system. These assumptions arise from the initial understanding of the context of use and enable the formulation of hypotheses that can then be validated through testing, interviews or prototyping. The goal is to ensure that each design decision responds to real needs, thus improving efficiency, usability, and overall user satisfaction within the platform.
Table 6 shows the product backlog, which contains the main functionalities identified and graded in ascending order following the Fibonacci sequence, estimated based on the cost of creating each user story. This allows a clear and structured prioritization, facilitating the planning and efficient development of the project.

4.2. Make

Based on the hypotheses and the product backlog developed in the Think phase, we carried out the concrete design of the solution, giving shape to a tangible proposal that allows us to validate the initial ideas. This stage is divided into two fundamental components: architecture, which describes the technical organization of the system to guarantee its functionality and coherence, and prototypes, which allow us to visualize and evaluate the user’s interaction with the interface.

4.2.1. Architecture

The development of the physical and logical diagrams was essential to clearly structure and communicate the project architecture.
The physical diagram, shown in Figure 2, integrates the system components: from the camera that captures images, to the Python (v 3.8.6) desktop application that manages the video input, to the YOLOv8 machine vision model that detects breaches and stores the results in an SQLite database.
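The last step of that pipeline, persisting detection results to SQLite, can be sketched with the standard library alone. The schema and helper names below are illustrative assumptions, not the paper’s actual code:

```python
import sqlite3
from datetime import datetime

def init_db(conn):
    """Create a minimal alerts table (hypothetical schema)."""
    conn.execute("""CREATE TABLE IF NOT EXISTS alerts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        camera TEXT NOT NULL,
        missing_ppe TEXT NOT NULL,
        detected_at TEXT NOT NULL)""")

def store_alert(conn, camera, missing_ppe):
    """Record one PPE violation detected by the vision model."""
    conn.execute(
        "INSERT INTO alerts (camera, missing_ppe, detected_at) "
        "VALUES (?, ?, ?)",
        (camera, missing_ppe,
         datetime.now().isoformat(timespec="seconds")))
    conn.commit()

# In-memory database for demonstration; the real app would use a file.
conn = sqlite3.connect(":memory:")
init_db(conn)
store_alert(conn, "Camera 1", "helmet")
print(conn.execute("SELECT camera, missing_ppe FROM alerts").fetchall())
```

Because SQLite is embedded and file-based, this storage layer needs no external server, consistent with the local-processing design goal.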
On the other hand, the logical diagram depicted in Figure 3 organizes the system into layers (client, presentation, business, and data), which allows visualizing the separation of responsibilities between modules such as dashboard, alerts, camera flow control, and data persistence. These diagrams are essential to ensure a consistent and scalable implementation, and reflect good software design practices aligned with the principles of a clean architecture.
Based on this architecture, it is planned to train the computer vision model using YOLOv8, relying on a dataset containing 44,000 images, which show people using and not using PPE. This dataset will be divided into 70% for training, 20% for validation, and 10% for testing, following good practices in the development of deep learning models. With this configuration, it is expected to obtain an average mAP (mean Average Precision) higher than 0.80, which would reflect a high accuracy in the automatic detection of PPE in real-world industrial environments, even with variations in illumination, position or type of camera used.
In our project, the use of the C4 model (Context, Container, Component, Code) was key to clearly visualize the system architecture at different levels. The context diagram, shown in Figure 4, allowed us to identify and communicate in a simple way the relationship between the main stakeholders and the systems involved. This high-level view is critical for any stakeholder to understand the purpose and scope of the system without the need for deep technical knowledge.
In Figure 5, the PPEYE system was divided into several technology containers to ensure a modular and scalable architecture. These include the detection container with YOLOv8, the real-time alerts container, the history container for tracking, the configuration container for customized management, and the database container (SQLite). This separation facilitates maintenance, reuse, and efficient deployment of the system.
The component diagram, shown in Figure 6, provides a more detailed view of the internal modules that make up each technological container, allowing a better understanding of their structure and communication between them. New components are incorporated in this view, such as the Processing Component, in charge of processing in real time the images captured by the webcam; the Parameters Component, which defines the PPE detection criteria according to the supervisor’s configuration; and the Dashboard Component, which presents a statistical summary based on historical data. This precise division facilitates maintenance, improves the traceability of the information flow, and allows for more accurate configuration.
Finally, in Figure 7, the class diagram shows how the main classes of the system interact to detect and manage PPE alerts. The user configures the system through the Config class, displays the Dashboard and receives the alerts. Detection is performed by DetectionService, which analyzes the images captured by the camera using the parameters defined in Config. If a violation is detected, an alert is generated and stored in the history. The user can query this history to view alerts and statistics. The diagram reflects a clear object-oriented architecture, where each class has specific responsibilities that allow a structured flow from image capture to results display.
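The responsibilities in that class diagram can be sketched as Python skeletons. Field and method names below are illustrative assumptions based on the diagram’s description, and the detector is a stub standing in for the YOLOv8 model:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Config:
    """Supervisor-defined detection parameters."""
    monitored_ppe: set = field(default_factory=lambda: {"helmet", "vest"})
    alert_frequency_min: int = 1

@dataclass
class Alert:
    camera: str
    missing_ppe: str
    timestamp: datetime

class History:
    """Stores alerts for later querying by the Dashboard/History views."""
    def __init__(self):
        self._alerts = []
    def add(self, alert):
        self._alerts.append(alert)
    def query(self):
        return list(self._alerts)

class DetectionService:
    """Analyzes frames using the parameters defined in Config."""
    def __init__(self, config, history, detector):
        self.config = config
        self.history = history
        self.detector = detector  # callable returning detected PPE items

    def process_frame(self, camera, frame):
        detected = self.detector(frame)
        # Any monitored item not detected on the worker raises an alert.
        for item in self.config.monitored_ppe - detected:
            self.history.add(Alert(camera, item, datetime.now()))

# Stub detector that "sees" a vest but no helmet.
service = DetectionService(Config(), History(), lambda frame: {"vest"})
service.process_frame("Camera 1", frame=None)
print([a.missing_ppe for a in service.history.query()])  # ['helmet']
```

This mirrors the structured flow in the diagram: Config parameterizes DetectionService, violations become Alert objects, and History serves subsequent queries.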

4.2.2. Prototypes

For prototype design, icons facilitate an intuitive and accessible interface, key in dynamic work environments that require quick response. As shown in Figure 8, conventional symbols such as the bell (notifications), eye (visualization), and pencil (editing) were used, allowing immediate understanding without the need for text.
In addition, a clear and intuitive visual design is key to a quick user response. As shown in Figure 9, a sober and safe color palette, in line with the manufacturing industry, was chosen to reinforce confidence, make alerts easier to read, and improve visibility. This visual consistency optimizes the supervisors’ monitoring experience.
The first view, as shown in Figure 10, presents a clear and functional interface that combines an image of the industrial environment with a simple login form, reinforcing the application’s context of use. The design prioritizes readability and accessibility, with well-delimited fields and a prominent action button that facilitates quick access for the user.
In the application, we opted for the Z pattern, a design model that guides the user’s gaze naturally and efficiently. The Home view, as shown in Figure 11, features a navigation bar at the top, key for navigating within the application. This space contains the names of the main views, ensuring that the user can quickly access the different functionalities of the application. The central content of the view is dedicated to the camera feed with the PPE detection system, allowing the user to see the security status of the area in real time. Just below, an option is provided to switch between the webcams linked to the application. Finally, on the right, a list of alerts generated by the detection system is displayed, with the characteristic color configured for each type of PPE, allowing the user to quickly review any incident.
The Dashboard view, as shown in Figure 12, provides a detailed overview of PPE monitoring over time. At the top, a drop-down menu allows selecting the month of interest. On the left, a pie chart shows alerts of workers without PPE, segmented by equipment type. On the right, there are two additional graphs: a pie chart showing how many workers without PPE were detected by the different cameras registered in the application and a bar chart detailing the number of workers without PPE by week in the selected month. This temporal breakdown allows you to identify patterns and make safety decisions, providing both an overview and a more detailed analysis.
The History view, as shown in Figure 13, presents a detailed record of the alerts generated by workers without PPE registered in the database. This interface is designed to provide a clear and organized display of each event, including the description of the alert, the camera that captured it, and the exact date and time. The tabular design, along with the ability to sort data by camera name or date, responds to the need for supervisors to quickly access critical information, facilitating the identification of non-compliance and timely intervention in specific areas.
In the Config view, illustrated in Figure 14, users benefit from a multi-level customization panel that lets them configure the system to the specific requirements of their environments. Using the “Safety Alerts” panel, supervisors can select which equipment (e.g., vests, helmets) they want the system to monitor, allowing a modular and scalable response depending on the operational context. In addition, the “Alert Frequency” option can be configured to send an alert every 1 to 5 min, avoiding alert saturation while pacing alerts to the task load, and the “Alert Colors” option lets users configure the colors assigned to each type of alert, improving visual identification and lowering user task burden. This flexible configuration follows the empathic design approach of Lean UX, allowing each user to control their alerts while adapting the system to their workflows and actual monitoring needs.
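The “Alert Frequency” behaviour described above amounts to suppressing repeat alerts of the same type within the configured window. A minimal sketch, with illustrative names:

```python
class AlertThrottle:
    """Suppress repeat alerts of one type within a configured window,
    implementing the 1-5 min "Alert Frequency" setting."""

    def __init__(self, frequency_min=1):
        self.window = frequency_min * 60  # window length in seconds
        self._last_sent = {}              # alert type -> last timestamp

    def should_send(self, alert_type, now):
        """Return True if enough time has passed since the last alert
        of this type; `now` is a timestamp in seconds."""
        last = self._last_sent.get(alert_type)
        if last is None or now - last >= self.window:
            self._last_sent[alert_type] = now
            return True
        return False

throttle = AlertThrottle(frequency_min=2)
print(throttle.should_send("helmet", now=0))    # True  (first alert)
print(throttle.should_send("helmet", now=60))   # False (inside window)
print(throttle.should_send("helmet", now=120))  # True  (window elapsed)
```

Tracking the last timestamp per alert type keeps a missing helmet from silencing an unrelated missing-vest alert.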
The Profile view, seen in Figure 15, creates a straightforward and understandable environment for the user to control their own information within the system. The Profile view allows the user to edit key information such as their name, email, and password, and also change their profile picture—again creating a more unique and recognizable interaction. The visual design is consistent with basic user experience (UX) standards, such as visibility of system status and editing directly using an icon next to each field.
The Alerts view, as shown in Figure 16, displays real-time events about workers not wearing their PPE, indicating the time, a clear description, and a customized color according to the type of alert, previously configured by the user. Its design responds to user experience (UX) principles, such as visual clarity, user control, and operational efficiency, ensuring an immediate response to risk situations.

4.3. Check

In this project, the System Usability Scale (SUS) was used as the main tool in the verification phase, providing a reliable and standardized measure of the user experience with the application prototype. The evaluation was carried out with 50 people who perform prevention tasks in manufacturing companies. They interacted with the prototype and completed the SUS questionnaire. The results allowed us to identify areas for improvement in the user interface and to make significant changes to optimize the experience.
To perform the evaluation, users were invited to work with the prototype in Figma and answer the SUS questionnaire. The questions addressed aspects such as user confidence, ease of learning, system consistency, and perceived complexity. The average score obtained was 87.6 out of 100, which, according to the interpretation of the SUS scale, is classified as “excellent” usability. This result confirms that the design of the application is highly intuitive, accessible, and satisfactory for the end user.
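For reference, the standard SUS scoring procedure [23] behind this average can be sketched as follows; the response vectors shown are illustrative examples, not the study's raw data:

```python
def sus_score(responses):
    """Compute a SUS score from ten Likert responses (1-5).

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even
                for i, r in enumerate(responses))   # i maps to odd items
    return total * 2.5

# Hypothetical respondents: strong and moderate agreement with positive
# items, mirrored disagreement with negative ones.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

The reported 87.6 is the mean of such per-respondent scores across the 50 participants.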
As a complement to the overall score, a stacked horizontal bar chart was generated that visualizes the percentage distribution of the responses to each question in the questionnaire, as shown in Figure 17. This graph allows us to observe in detail the users’ perception of individual aspects of the system. The low proportion of negative responses suggests that there are no critical areas that compromise the user experience. This visualization not only quantitatively supports the score obtained, but also provides clear visual information to detect specific patterns, strengths, and possible opportunities for improvement in the evaluated interface.

5. Discussion

The SUS score of 87.6 places the system in the “excellent” category. When comparing the presented solution with a reference application for emotion management, which obtained a score of 86 [24], the difference is subtle, but it reinforces the effectiveness of the user-centered approach applied in our design, especially in aspects such as the intuitive interface, alert customization, and local data processing. These elements appear to have positively influenced users’ perception of efficiency and control, suggesting a competitive advantage in industrial settings.
Nevertheless, some limitations of our work should not be overlooked. One was the number of SUS participants surveyed; a larger pool of evaluators would have provided more consensus-based opinions. In addition, factors such as the quality of the webcams used and the minimum processing power required to run detection locally could add significant variability to system performance and the resulting user experience. These limitations will be factored into future improvements of the application.
Likewise, when comparing our SUS score with that obtained in a study evaluating an intelligent health monitoring system, where an average of 51.25 was reported, a more marked difference is evident that reaffirms the positive impact of key design decisions [25]. The use of clear visual components, optimized navigation flows, and an adaptive approach to the actual conditions of the environment appear to have contributed to a more satisfactory user experience. This comparison also highlights opportunities for improvement in future versions.

6. Conclusions

In conclusion, the design of the application focused from the beginning on the end users: supervisors who need quick responses and easy-to-use digital environments. Through the empathy map and product backlog, key needs such as simplicity, speed, and adaptability to the work environment were identified. These tools allowed us to translate frustrations and expectations into concrete UX decisions that guided the entire proposal.
Consequently, the system architecture was conceived with modularity and scalability criteria. The use of the C4 model made it possible to represent each level of the system, from the general context to the internal components. The separation into containers and the use of logical, physical, and class diagrams ensured a clear and adaptable structure, in line with good software design practices.
In addition, the functionalities of the prototype were established based on the defined assumptions and product backlog. These instruments allowed us to validate the icons, establish a clear visual hierarchy of elements, and apply a sober color palette that enhances interaction. Each screen was aimed at specific goals, enabling efficient user interaction while ensuring visual and functional coherence throughout the system.
Finally, to validate the usability of the prototype, the System Usability Scale (SUS) was applied with 50 users working in industrial environments, yielding an average score of 87.6 out of 100. This is considered excellent and indicates that the application’s design and functionality are intuitive, reliable, and efficient for its intended users. Beyond these positive results, the testing process revealed possible future improvements, including interface adjustments for low-visibility conditions, improving model accuracy in the face of PPE variations, and potentially adding multi-language support.
These possibilities give the product the opportunity to scale, deliver greater value in other industrial contexts, and remain focused on the user. This research is part of a multi-stage thesis project; it contributes to addressing the challenge of safety in manufacturing by demonstrating, iteratively, how user-centered machine vision solutions can effectively address this issue.

Author Contributions

L.A.T.-L. and R.A.R.-G. contributed to the conceptualization of the study. L.A.T.-L. led the methodology design, software development, formal analysis, and original draft preparation. R.A.R.-G. supported the investigation, validation, visualization, and contributed to the review and editing of the manuscript. J.C.M.-A. supervised the research process, provided continuous feedback and revisions throughout the development of the manuscript, and was responsible for the selection of the journal and coordination of the publication process. All authors have read and agreed to the published version of the manuscript.

Funding

This research received financial support from the “Dirección de Investigación de la Universidad Peruana de Ciencias Aplicadas” through the UPC-EXPOST-2025 incentive program.

Data Availability Statement

This study includes data collected through the System Usability Scale (SUS) questionnaire, administered to participants for usability evaluation purposes. These data are not publicly available to protect participant confidentiality, as prior consent to share personal information was not obtained. However, anonymized data may be provided upon reasonable request and with appropriate authorization.

Acknowledgments

The authors would like to thank the “Dirección de Investigación de la Universidad Peruana de Ciencias Aplicadas” for the support provided to carry out this research work through the UPC-EXPOST-2025 incentive.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ahmed, M.I.B.; Saraireh, L.; Rahman, A.; Al-Qarawi, S.; Mhran, A.; Al-Jalaoud, J.; Al-Mudaifer, D.; Al-Haidar, F.; AlKhulaifi, D.; Youldash, M.; et al. Personal Protective Equipment Detection: A Deep-Learning-Based Sustainable Approach. Sustainability 2023, 15, 13990.
2. Lo, J.-H.; Lin, L.-K.; Hung, C.-C. Real-Time Personal Protective Equipment Compliance Detection Based on Deep Learning Algorithm. Sustainability 2022, 15, 391.
3. Pisu, A.; Elia, N.; Pompianu, L.; Barchi, F.; Acquaviva, A.; Carta, S. Enhancing Workplace Safety: A Flexible Approach for Personal Protective Equipment Monitoring. Expert Syst. Appl. 2024, 238, 122285.
4. Jin, Z.; Gambatese, J.; Karakhan, A.; Nnaji, C. Analysis of Prevention through Design Studies in Construction: A Subject Review. J. Safety Res. 2023, 84, 138–154.
5. Vukicevic, A.M.; Petrovic, M.; Milosevic, P.; Peulic, A.; Jovanovic, K.; Novakovic, A. A Systematic Review of Computer Vision-Based Personal Protective Equipment Compliance in Industry Practice: Advancements, Challenges and Future Directions. Artif. Intell. Rev. 2024, 57, 319.
6. Zhang, H.; Mu, C.; Ma, X.; Guo, X.; Hu, C. MEAG-YOLO: A Novel Approach for the Accurate Detection of Personal Protective Equipment in Substations. Appl. Sci. 2024, 14, 4766.
7. Rahmawati, S.D.; Prasetiyo, B. Application of Lean UX and System Usability Scale (SUS) Methods in Redesigning User Interface and User Experience on Adella Hospital Online Registration Website. J. Adv. Inf. Syst. Tech. 2025, 6, 200–218.
8. Yipeng, L.; Junwu, W. Personal Protective Equipment Detection for Construction Workers: A Novel Dataset and Enhanced YOLOv5 Approach. IEEE Access 2024, 12, 47338–47358.
9. Ahmad, H.M.; Rahimi, A. SH17: A Dataset for Human Safety and Personal Protective Equipment Detection in Manufacturing Industry. J. Saf. Sci. Resil. 2025, 6, 175–185.
10. Wang, J.; Qi, Z.; Wang, Y.; Liu, Y. A Lightweight Weed Detection Model for Cotton Fields Based on an Improved YOLOv8n. Sci. Rep. 2025, 15, 457.
11. Gupta, R.; Jansen, N.; Regnat, N.; Rumpe, B. Design Guidelines for Improving User Experience in Industrial Domain-Specific Modelling Languages. In Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, Montreal, QC, Canada, 23–28 October 2022; ACM: New York, NY, USA, 2022.
12. Alhammad, M.M.; Moreno, A.M. Integrating User Experience into Agile: An Experience Report on Lean UX and Scrum. In Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Software Engineering Education and Training, Pittsburgh, PA, USA, 21–29 May 2022; ACM: New York, NY, USA, 2022.
13. Islam, M.R.; Zamil, M.Z.H.; Rayed, M.E.; Kabir, M.M.; Mridha, M.F.; Nishimura, S.; Shin, J. Deep Learning and Computer Vision Techniques for Enhanced Quality Control in Manufacturing Processes. IEEE Access 2024, 12, 121449–121479.
14. Cardona-Reyes, H.; Muñoz-Arteaga, J.; Villalba-Condori, K.; Barba-González, M.L. A Lean UX Process Model for Virtual Reality Environments Considering ADHD in Pupils at Elementary School in COVID-19 Contingency. Sensors 2021, 21, 3787.
15. Cornide-Reyes, H.; Duran, C.; Baltierra, S.; Silva-Aravena, F.; Morales, J. Improving UX in Digital Transformation Projects through Lean Principles. In Social Computing and Social Media; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 161–178. ISBN 9783031612800.
16. Sam, J.; Richardson, C.G.; Currie, L.M. Application of Two-Eyed Seeing in Adolescent Mental Health to Bridge Design Thinking and Indigenous Collective Storytelling. Int. J. Environ. Res. Public Health 2022, 19, 14972.
17. Krog, J.; Akbas, C.; Nolte, B.; Vietor, T. Development of a Functional and Logical Reference System Architecture in Automotive Engineering. Systems 2025, 13, 141.
18. Brown, S. Software Architecture for Developers, 2nd ed.; Leanpub: Birmingham, UK, 2014.
19. Bjarnason, E.; Lang, F.; Mjöberg, A. An Empirically Based Model of Software Prototyping: A Mapping Study and a Multi-Case Study. Empir. Softw. Eng. 2023, 28, 115.
20. Merritt, K.; Zhao, S. An Innovative Reflection Based on Critically Applying UX Design Principles. J. Open Innov. 2021, 7, 129.
21. Weichbroth, P. Usability Testing of Mobile Applications: A Methodological Framework. Appl. Sci. 2024, 14, 1792.
22. Nielsen, J. Usability Engineering; Academic Press: San Diego, CA, USA, 1993; pp. 71–114.
23. Brooke, J. SUS: A ‘quick and dirty’ usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 189–194.
24. Mondragón Bernal, I.F.; Lozano-Ramírez, N.E.; Puerto Cortés, J.M.; Valdivia, S.; Muñoz, R.; Aragón, J.; García, R.; Hernández, G. An Immersive Virtual Reality Training Game for Power Substations Evaluated in Terms of Usability and Engagement. Appl. Sci. 2022, 12, 711.
25. Tan, J.J.Y.; Ismail, M.N.; Md Fudzee, M.F.; Jofri, M.H.; Nordin, M.A.H. Development of Mobile Application for Enhancing Emotion Management for UTHM Students Using DASS-21. J. Adv. Res. Des. 2025, 129, 1–19.
Figure 1. Empathy map.
Figure 2. Physical diagram of the proposed PPEYE application (name of the application, which is a play on words between PPE and “eye”).
Figure 3. Logical diagram from PPEYE.
Figure 4. Context diagram.
Figure 5. Content diagram.
Figure 6. Component diagram.
Figure 7. Class diagram.
Figure 8. PPEYE icons.
Figure 9. PPEYE color palette.
Figure 10. Login view.
Figure 11. Home view.
Figure 12. Dashboard view.
Figure 13. History view.
Figure 14. Config view.
Figure 15. Profile view.
Figure 16. Alerts view.
Figure 17. Bar graph of the SUS responses.
Table 1. Successful applications of deep learning in industry.

| Application | Method | Results | Lessons for Your Project |
|---|---|---|---|
| PPE detection | Clip2Safety (YOLO-World + GPT-4o) | 79.7% accuracy in attributes | Avoid cloud dependency |
| Follow-up | YOLOv8 + SAM | 95.61% MOTA in multi-camera settings | Pre-processing techniques (e.g., histogram equalization) applicable to your system |
| Optimization | MEAG-YOLO (YOLOv8n) | 98.4% mAP in substations | Using lightweight modules (GhostConv) for CPUs |
Table 2. Comparison of design methodologies in industrial environments.

| Criteria | Traditional Design | Lean UX + SUS |
|---|---|---|
| Iteration time | Slow | Fast |
| Design approach | Requirements based | User centered |
| Usability evaluation | Informal | SUS Scale |
| Adaptability to change | Low | High |
Table 3. Advantages of YOLOv8 over traditional methods.

| Industrial Challenge | Traditional Solution | Limitations | Benefit |
|---|---|---|---|
| Real-time monitoring | Cameras + human supervisors | Costly, error prone | Cost reduction and increased reliability |
| Variable conditions | Random inspection | Limited coverage | Continuous detection without gaps |
| Regulatory compliance | Manual records | Risk of counterfeiting | Transparent audits |
Table 4. Questions for the evaluation of the SUS.

| No. | Question |
|---|---|
| 1 | I would like to use this system frequently. |
| 2 | The system is unnecessarily complex. |
| 3 | I found the system easy to use. |
| 4 | I think I would need the support of a technician to be able to use this system. |
| 5 | The various functions of the system are well integrated. |
| 6 | There is too much inconsistency in this system. |
| 7 | I think most people would learn to use this system very quickly. |
| 8 | I find the system very cumbersome to use. |
| 9 | I felt safe using the system. |
| 10 | I needed to learn a lot of things before I could use the system. |
Table 5. Assumptions and hypotheses.

| Assumptions | Hypotheses |
|---|---|
| The user needs to act quickly in critical situations. | If we design direct action flows with quick access and clear visual hierarchy, the user will be able to register events in an agile and efficient way. |
| The user works in multiple windows or tabs simultaneously. | If we offer a modular interface, we facilitate the simultaneous handling of tasks and the comparison of information in real time. |
| Technical information can be dense and complex. | If we structure the information with hierarchical design, intelligent filters, and clear visualization, we will reduce the user’s cognitive load. |
| The user expects to see evidence of recorded work quickly. | If we enable immediate uploading of images, files or logs from the browser, we will increase confidence in the system and reduce management times. |
| The user needs reports ready to communicate results. | If we implement automated exports and customizable dashboards, we will improve productivity and reporting capabilities. |
Table 6. Product backlog.

| Code | Title | Value (1/2/3/5/8) |
|---|---|---|
| US001 | Integrate YOLOv8 for PPE detection | 8 |
| US002 | Integrate OpenCV for real-time capture | 5 |
| US003 | Train a model for PPE detection | 8 |
| US004 | Visual interface to see when a worker is not wearing PPE | 5 |
| US005 | Quick identification of missing PPE detected on screen | 5 |
| US006 | Receive alerts when a worker is without PPE | 5 |
| US007 | Keep a history of alerts | 3 |
| US008 | Organize alerts | 3 |
| US009 | Display detection graphs for the month | 5 |
| US010 | Configure alert colors | 2 |
| US011 | Review detection statistics | 3 |
| US012 | Configure profile and frequency | 1 |