Article

Smart Manufacturing Workflow for Fuse Box Assembly and Validation: A Combined IoT, CAD, and Machine Vision Approach

by Carmen-Cristiana Cazacu 1,*, Teodor Cristian Nasu 2, Mihail Hanga 2, Dragos-Alexandru Cazacu 3 and Costel Emil Cotet 1
1 Robots and Production Systems Department, The Faculty of Industrial Engineering and Robotics, National University of Science and Technology POLITEHNICA Bucharest, Splaiul Independenței 313, 060041 Bucharest, Romania
2 Department of Machine Construction Technology, The Faculty of Industrial Engineering and Robotics, National University of Science and Technology POLITEHNICA Bucharest, Splaiul Independenței 313, 060041 Bucharest, Romania
3 Education Team, PTC Eastern Europe SRL, Splaiul Independenței 319, 060044 Bucharest, Romania
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9375; https://doi.org/10.3390/app15179375
Submission received: 2 July 2025 / Revised: 13 August 2025 / Accepted: 24 August 2025 / Published: 26 August 2025

Abstract

This paper presents an integrated workflow for smart manufacturing, combining CAD modeling, Digital Twin synchronization, and automated visual inspection to detect defective fuses in industrial electrical panels. The proposed system connects Onshape CAD models with a collaborative robot via the ThingWorx IoT platform and leverages computer vision with HSV color segmentation for real-time fuse validation. A custom ROI-based calibration method is implemented to address visual variation across fuse types, and a 5-s time-window validation improves detection robustness under fluctuating conditions. The system achieves a 95% accuracy rate across two fuse box types, with confidence intervals reported for statistical significance. Experimental findings indicate an approximate 85% decrease in manual intervention duration. Because of its adaptability and extensibility, the design can be implemented in a variety of assembly processes and provides a foundation for smart factory systems that are more scalable and independent.

1. Introduction

In the era of Industry 4.0 and the emerging Industry 5.0 paradigm, the convergence of digital technologies such as the Internet of Things (IoT), Computer-Aided Design (CAD), and computer vision is enabling the development of intelligent, flexible, and adaptive manufacturing workflows.
This manuscript significantly extends our previous work [1], which focused on optimizing the robotic fuse insertion process using API (Application Programming Interface)-driven digital twins. In contrast, the current study presents an integrated smart manufacturing workflow that includes real-time orchestration through CAD-IoT interoperability and vision-based validation. The robotic execution details have been deliberately excluded here to avoid redundancy, as they were thoroughly covered in [1]. We implement both a one-time manual ROI setup and an automatic CAD-driven ROI extraction; the latter is used by default in our workflow.

2. Related Work

Real-time production system monitoring, control, and optimization are made possible by the Internet of Things (IoT) paradigm [2,3], which also ensures connectivity and interoperability among various industrial systems to provide scalable automation [4]. Creating responsive and robust manufacturing environments requires dynamic modeling, performance prediction, and operational synchronization, all of which are made possible by a digital twin, which is a real-time digital version of a physical entity updated via sensor and IoT data streams [5,6]. Discrete manufacturing benefits greatly from its connection with IIoT platforms, which enable fast feedback mechanisms and closed-loop control [7]. Moreover, the use of cloud-based CAD programs like Onshape improves the capabilities of digital twins by making model-based representations available at all organizational levels [8].
Cyber–physical systems (CPS), digital twins, and machine vision are all integrated within Industry 4.0 frameworks, as demonstrated by recent developments in smart manufacturing systems. A comprehensive digital twin architecture was proposed by Bécue et al. [9] with the aim of enhancing optimization and operational resilience in future factories, while Huang et al. [10] offered a detailed analysis of AI-driven digital twins, highlighting their significance in intelligent robotics and smart industrial applications. CPS remains a key component, as shown by Ryalat et al. [11], who offered a smart factory model integrating CPS and IoT principles for effective Industry 4.0 deployment. To support this, Cazacu et al. [1] demonstrated the effectiveness of API-based digital twins in expediting wiring box assembly processes, emphasizing the importance of a smooth integration between CAD systems and execution environments.
To enhance quality assurance, Bhandari and Manandhar [12] explored CAD–computer vision integration for precise 3D model reconstruction. Rahman et al. [13] presented a cloud-based CPS for remote additive manufacturing, indicating the growing feasibility of distributed manufacturing systems. Abbas et al. [14] developed a safety monitoring system using computer vision and depth sensors, extending the application scope beyond assembly quality control.
Furferi and Servi [15] demonstrated the use of machine vision for precise color classification, relevant to complex textures such as wool textiles. Yang et al. [16] employed a graph neural network for intrusion detection in the Industrial Internet of Things (IIoT) to increase cyber-resilience in manufacturing networks. Liu et al. [17] reviewed IIoT trends and implementation technologies, while Dhanda et al. [18] examined the challenges of human–robot collaboration in the context of Industry 5.0. Matheson et al. [19] offered more details on human–robot interaction in the manufacturing sector.
The Internet of Things, smart automation, and digital twin technologies are used to create intelligent manufacturing systems. Yang et al.’s comprehensive analysis of IoT in smart manufacturing [20] emphasizes the need for data security, scalability, and interoperability in addition to real-time sensing and decision-making for the development of smart factory infrastructure. According to Tao et al. [21], our suggested system’s fuse validation procedures can be greatly improved by the real-time feedback and predictive analytics made possible by IoT-based, data-driven architectures.
Vilanova et al. [22] highlight the human-centric paradigm in smart manufacturing, underscore the transition to collaborative intelligence in Industry 5.0, and advocate for human–robot interaction, wherein adaptable robotic decision-making enhances human supervision. In electromechanical environments, Sun et al. [23] illustrate that the integration of intelligent algorithms with digital twin frameworks facilitates real-time simulation at the system architecture level, enhancing fault detection and configuration control—elements that are especially vital for high-precision operations such as fuse box assembly.
Calianu et al. [24] underline the importance of modularity and dynamic reconfiguration in the development of an adaptive data gathering system utilizing IoT nodes for industrial sensing platforms. This methodology aligns with our modular architecture, which consolidates data collected from CAD and visual systems for synchronized processing through an Internet of Things hub. In an in-depth study of cloud manufacturing and Industry 4.0 approaches, Zhong et al. [25] accentuated the need for cloud-based control systems and decentralized data processing, particularly in production environments with dynamic configurations. This demonstrates the importance of incorporating the Onshape and ThingWorx platforms into our architecture to facilitate real-time synchronization between digital and physical models.
In summary, recent literature shows that combining cloud-based CAD (digital twins), IIoT connectivity, and computer vision creates powerful smart workflows for assembly. Each component—IoT, CAD models, robots, and computer vision—feeds into a unified Industry 5.0 architecture that supports real-time orchestration and traceability. Prior studies of wiring-box assembly (e.g., using Onshape CAD and API-based digital twins) demonstrated the viability of this approach [1]. Building on that, the current trend is to close the loop: every action of the robot is monitored by computer vision, synced through the IIoT platform, and reflected in the virtual model. Such integration not only automates validation but also enables predictive and adaptive control, fulfilling the promise of intelligent, flexible manufacturing.

3. Methodology

This article builds upon the previous work by Cazacu et al. [1], which detailed the fuse assembly process in wiring boxes. In the present research, the aim is to provide a proof-of-concept for the entire smart manufacturing workflow, starting with the selection of the wiring box type, followed by the automated transmission of assembly instructions to the robot, and concluding with the validation and quality inspection of the final assembly.
The previous method [1] allowed automated fuse insertion using static geometric data with minimal runtime validation or feedback by establishing a unidirectional connection between the collaborative robot and the CAD model (Onshape). The methodology in this study is greatly enhanced by the ThingWorx IoT platform, which permits real-time monitoring, synchronization, and bidirectional communication between virtual and physical systems. The digital twin is now responsible for pre-assembly configuration, execution, and post-assembly validation. The solution enables the operator to choose the type of fuse box via an Internet of Things dashboard. Upon independently acquiring the relevant CAD data, the robot modifies its insertion technique. During operation, the vision module detects anomalies including robot misalignment, sensor drift, and improperly oriented or labeled fuses, and ThingWorx relays the feedback. Upon identifying inconsistencies, the digital twin modifies its internal state and initiates corrective measures, including halting the procedure or altering the robot’s trajectory.
Following the assembly, a validation process employing computer vision commences. The camera evaluates each fuse using color segmentation (HSV), region of interest (ROI) matching, and threshold logic. The outcomes are compared with the anticipated CAD configuration, and the assembly is categorized as successful or unsuccessful. Once this decision is delivered through the IoT dashboard, the detection, execution, and validation cycle is complete. This closed-loop architecture aligns with the fundamental principles of Industry 4.0 and 5.0, evolving the system from a static automation script into a fully responsive, self-monitoring intelligent assembly cell.
This research outlines the development of a unique interface on the ThingWorx platform that enables users to select the suitable type of wiring box for assembly. The robot can independently identify the chosen wiring box and execute the corresponding assembly program by merging the ThingWorx environment with a collaborative robotic arm and a cloud-based CAD system (Onshape). The system initiates an automatic quality inspection phase based on a computer vision module following the installation of the fuses.
Each fuse box design was subjected to 100 independent trials, resulting in 200 test samples to evaluate the system’s performance. To emulate regulated manufacturing conditions, all experiments were conducted with fixed overhead LED panels and uniform artificial illumination. To reduce background noise and reflections, the robot worked in a small area. Binary classification was used to evaluate the visual validation’s accuracy in identifying the type of fuse and where it was placed for each ROI. When a detection’s color and position matched the CAD requirements, it was deemed accurate. Based on the total number of accurate classifications across all test instances, a final accuracy of 95% was achieved. We calculated the 95% confidence interval using the Wilson score method to assess statistical robustness, thereby ensuring the reliability of the findings presented.
Manual-intervention time (MIT) is defined as operator-active time per cycle (recipe selection, visual checks, confirmations), excluding fixture load/unload and machine-only operations. MIT values were derived from ThingWorx timestamps and verified by stopwatch. We summarize each condition by the median across repeated cycles and compute the relative reduction as:
$$\Delta \mathrm{MIT}\ (\%) = 100 \times \left(1 - \frac{\mathrm{MIT}_{\mathrm{auto}}}{\mathrm{MIT}_{\mathrm{manual}}}\right)$$
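For illustration, the sketch below computes this reduction from per-cycle operator-active times; the function and variable names are hypothetical, and the cycle times are assumed to have already been extracted from the ThingWorx timestamps as described above.

```python
# Illustrative sketch only; cycle times are assumed to come from ThingWorx timestamps.
from statistics import median

def mit_reduction(manual_cycles_s: list[float], auto_cycles_s: list[float]) -> float:
    """Relative manual-intervention-time reduction (%), per the formula above."""
    return 100.0 * (1.0 - median(auto_cycles_s) / median(manual_cycles_s))
```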
This intelligent production line is adaptable for various discrete assembly tasks using modular components and configurable logic; however, validation to date has primarily concentrated on fuse box scenarios.

4. Proof of Concept

4.1. System Architecture Overview

The system architecture depicted in Figure 1 incorporates four crucial components: (1) ThingWorx (9.3.16), an Internet of Things platform that allows users to select the type of fuse box and start assembly by synchronizing data between the virtual model and physical execution; (2) a collaborative robot that receives instructions from ThingWorx and automatically inserts fuses according to the CAD configuration; (3) a camera module that captures images of the assembled fuse box and uses HSV-based segmentation to perform automated visual validation; and (4) Onshape (cloud SaaS; no user-visible versioning due to continuous deployment), a cloud-based CAD platform that models fuse boxes with accurate fuse types and locations. Together, these components provide full automation of assembly, enable real-time synchronization between the CAD model and robotic execution via a digital twin framework, and improve detection reliability through statistical validation over 5-s intervals. The modular and extensible architecture enables straightforward adaptation to many assembly tasks beyond fuse boxes. The system design of this study demonstrates the connection between the computer vision module, an IoT orchestration layer (ThingWorx 9.3.16), a collaborative robot (cobot), and the Onshape CAD platform. The architecture enables closed-loop validation and real-time data synchronization for fuse box construction operations. To verify precise fuse placement, the computer vision technology integrated into this workflow relies on image capture and processing. Several key Python libraries were used for implementation: matplotlib (v3.10.1) for development debugging and visualization; pupil_apriltags (v1.0.4.post11) for precise localization using fiducial markers; NumPy (v2.2.5) for numerical computations; and OpenCV (v4.11.0.86) for image processing and HSV masking. All algorithms were created and implemented using Python 3.12.2 to guarantee module compatibility and enable efficient automation within the ThingWorx-integrated environment.
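To make the software stack concrete, the minimal sketch below shows how these libraries could be combined for frame capture and AprilTag-based localization; it is an illustrative fragment under assumed names (camera index 0, tag36h11 family), not the authors' implementation.

```python
# Minimal capture-and-locate sketch; camera index and helper names are assumptions.
import cv2
import numpy as np
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # fiducial markers used on the fixtures

def grab_frame(cam_index: int = 0) -> np.ndarray:
    """Capture a single BGR frame from the inspection camera."""
    cap = cv2.VideoCapture(cam_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Camera frame could not be captured")
    return frame

def locate_fixture(frame: np.ndarray) -> dict:
    """Detect AprilTags and return {tag_id: center_pixel} for fixture registration."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return {d.tag_id: np.array(d.center) for d in detector.detect(gray)}
```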
The comprehensive smart manufacturing workflow proposed in this study is illustrated in Figure 2. This illustrates the automation of the fuse assembly and validation process with the integration of the CAD environment, IoT platform, digital twin, collaborative robot, and computer vision module. The process begins with the selection of the wiring box model and concludes with a decision node that assesses the need for rework based on visual examination. This closed-loop architecture facilitates adaptive control and traceability throughout the manufacturing cycle, ensuring real-time synchronization.

4.2. Onshape CAD Models

Onshape is utilized:
  • As the primary CAD platform for creating wiring box models, extracting fuse positions and types, and defining the geometry used in robotic execution and validation.
  • To design wiring boxes quickly and easily; each box contains data about fuse positions and types (Figure 3).
  • To model the collaborative robot and create the Digital Twin of the physical robot; Cazacu et al. [1] presented this step in a previous study.
  • To connect the computer vision algorithm to Onshape and automate the recognition process.
Both models were created in Onshape (CAD) and are compatible with the smart system described in the workflow. They include all necessary geometric and semantic data—such as fuse types, positions, and connector interfaces—which are crucial for:
  • Automated recognition of fuse layout
  • Programmatic selection of assembly tasks via ThingWorx
  • Robot execution of fuse insertion
  • Real-time validation using the camera and computer vision module
Figure 4 depicts the two fuse boxes employed in the study. Both are mounted on custom-designed blue 3D-printed supports aimed at facilitating vision-based localization and ensuring mechanical stability. The AprilTag markings located at the corners of these fixtures allow the computer vision module to precisely determine the position and orientation of each box. This ensures that during automated validation, the specified Regions of Interest (ROIs) are precisely aligned. To facilitate accurate comparison and closed-loop verification in the smart manufacturing process, the colored fuses in the boxes are methodically organized in configurations that align with their assigned virtual CAD models.
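A minimal sketch of this registration step is given below; it assumes the four corner tags carry IDs 0 to 3 and that the ROIs were defined on a fixed-size reference image, both of which are illustrative assumptions rather than details taken from the setup.

```python
# Rectification sketch: warp the live frame onto the reference frame used for ROI definition.
# Tag IDs and the reference size are assumptions for illustration.
import cv2
import numpy as np

REF_SIZE = (800, 600)  # (width, height) of the assumed reference image

def rectify_to_reference(frame: np.ndarray, tag_centers: dict) -> np.ndarray:
    """Map detected corner-tag centers onto the reference corners and warp the frame."""
    src = np.float32([tag_centers[i] for i in (0, 1, 2, 3)])
    dst = np.float32([[0, 0], [REF_SIZE[0], 0],
                      [REF_SIZE[0], REF_SIZE[1]], [0, REF_SIZE[1]]])
    H, _ = cv2.findHomography(src, dst)
    return cv2.warpPerspective(frame, H, REF_SIZE)
```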
The Onshape CAD model delineates the fuse configuration for each physical box, subsequently relayed to the ThingWorx platform, which interfaces with the robot and the vision system. During validation, the system uses camera input and HSV-based filtering to detect whether the correct fuse type and position match the digital model. The physical setup shown here is therefore crucial for enabling robust digital twin synchronization, automated quality control, and real-time feedback in the overall workflow.
The digital twin implementation for the robotic assembly process has been thoroughly detailed in a previous study entitled “Optimizing Assembly in Wiring Boxes Using API Technology for Digital Twin” [1]. We encourage readers interested in the technical aspects of the robot’s motion planning, task execution, and digital synchronization to consult that publication. In the present work, we focus specifically on the configuration of the fuse box model, the integration of CAD and IoT platforms, and the vision-based validation of the assembly process. The above-mentioned article already comprehensively addresses the actual robotic execution of fuse insertion, which is beyond the scope of this paper.
This integration of digital modeling, robotic assembly, and vision-based validation represents a key component of the Industry 5.0 approach implemented in the study.

4.3. ThingWorx Mashup Selection

We created a dedicated ThingWorx Mashup interface to make it easier for users to interact with the smart fuse box assembly and validation system, as seen in Figure 5. This interface serves as the control center for starting and monitoring the workflow.
The interface comprises three input components: a fuse box list selector, a status selector, and a quantity input field. Users can choose a certain fuse box model, import the corresponding CAD model and dataset, and oversee the process status. The number of fuse boxes to be concurrently processed in a batch is determined by the input quantity. The chosen configuration is transmitted to the robot and the computer vision system. Two control buttons, designated as start and emergency stop, commence the procedure and uphold safety standards.
ThingWorx was employed to develop this interface due to its robust IoT integration capabilities, real-time data processing, and smooth connectivity with the robotic system and Onshape CAD models. The platform provides a secure and intuitive interface that allows anyone to control workflows without requiring sophisticated technological skills.
The necessity for dynamic task execution control, intuitive interaction, and operational transparency—crucial components in Industry 5.0 smart manufacturing environments—led to the selection of this interface.
The advantages of this user interface are:
  • The interface simplifies the assembly process.
  • It is accessible to any user.
  • It operates in a fast and secure environment.

4.4. Computer Vision Module

The computer vision module enables the automatic validation of fuse placement using real-time image processing. It consists of several processing stages, from region identification to tolerance setting and time-based validation, all relying on datasets generated during the preprocessing phase.
The flow of the computer vision module is presented in Figure 6:
The programming section covers two main processes:
  • Creating a data set for image processing.
  • Validating the images in practice.
The dataset generation stage, which encompasses the definition of Regions of Interest (ROIs) and the creation of HSV masks, constitutes a one-time configuration task executed during the initial calibration phase. This guarantees that color thresholds and fuse position mappings are accurately aligned with the physical fuse box utilized in the study. Upon completion of this setup, the system functions in real time without necessitating reconfiguration, facilitating the fully autonomous validation of incoming images according to predefined criteria.

4.4.1. ROI Definition

The method commences with the identification of the Regions of Interest (ROIs) for each fuse situated within the fusebox. The define_roi.py script, Figure 7, enables the user to ascertain the location of each fuse and identify its respective type. This stage generates a pkl file (fuse_roi.pkl) that contains the coordinates and classifications for all regions of interest (ROIs), serving as a reference for future image analysis.
Two ROI definition modes are supported:
  • Manual mode: one-time ROI drawing tool; coordinates saved and reused.
  • Automatic mode (Figure 8): Onshape API → mate connectors (relative to BaseMate) → parse fuse type → AprilTag-based registration → rectification → CAD slot footprints projected to pixel-space → cached ROIs for runtime.
Unless otherwise stated, experiments use the automatic, CAD-derived ROIs; the manual ROI tool was used only for initial setup and as a fallback.
The dataset creation process begins with the execution of the define_roi.py script from the preprocessing module, which enables the user to manually define the position and type of each fuse within the fuse box. In the current implementation, up to eight fuse types can be specified. This ROI definition tool serves as an alternative when automatic extraction from the CAD model (e.g., via the Onshape API) is not feasible or practical. The spatial location and anticipated fuse type are defined, allowing for the interactive selection of regions of interest directly on a reference image. This manual approach is especially beneficial for tailored configurations, preliminary testing, and scenarios where digital twin synchronization is temporarily inaccessible.
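The sketch below mirrors the spirit of this manual mode in define_roi.py; the exact fields stored in fuse_roi.pkl are an assumption made for illustration.

```python
# Manual ROI definition sketch; the stored data layout is assumed, not the authors' exact format.
import pickle
import cv2

def define_rois(reference_image_path: str, out_path: str = "fuse_roi.pkl") -> None:
    """Let the user draw one rectangle per fuse slot and assign a fuse type (1-8)."""
    img = cv2.imread(reference_image_path)
    rois = []
    for (x, y, w, h) in cv2.selectROIs("Define fuse ROIs", img):
        fuse_type = int(input("Fuse type for this slot (1-8): "))
        rois.append({"bbox": (int(x), int(y), int(w), int(h)), "type": fuse_type})
    cv2.destroyAllWindows()
    with open(out_path, "wb") as f:
        pickle.dump(rois, f)  # reused later as the reference for image analysis
```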

4.4.2. HSV Mask Creation

The masking_hsv.py script generates HSV-based color masks for each fuse type, conforming to the specified Regions of Interest (ROIs). The user can interactively establish upper and lower HSV thresholds to extract pertinent fuse pixels, thus effectively removing background or superfluous elements. The tool allows navigation through fuse categories while the application retrieves previously saved ROI settings. The HSV color model is preferred over the RGB model because it offers enhanced segmentation capabilities, facilitating direct adjustments to color hue, saturation, and brightness, which makes the differentiation of colored elements more intuitive and efficient. Unlike RGB, which requires complex combinations of primary channels, HSV allows users to directly choose a color spectrum and modify tolerance levels through saturation and value parameters. During this process, each mask includes a pixel acceptance threshold that ensures validation only occurs when a sufficient portion of the expected fuse color is detected in its assigned slot. The process results in two output files: hsv_tolerances.pkl, which stores the color range definitions per fuse type, and fuse_roi.pkl, which contains the spatial and classification metadata for each ROI. The validation algorithm is designed to confirm the presence of the correct fuse in its designated location while ignoring foreign fuses incorrectly placed elsewhere, thus reducing the impact of false positives.
To simplify the calibration process, the algorithm automatically computes the average HSV color values across all ROIs associated with a specific fuse type, Figure 9. The user is not required to manually identify the precise color range for each fuse; instead, they only need to specify the degree of tolerance, i.e., how much to expand or contract the HSV range around the computed average. This is achieved by adjusting upper and lower bounds for each HSV component individually. The creation of masks is rendered considerably more accessible and less prone to errors, especially when dealing with various fuse types or fluctuating lighting conditions. A second file, hsv_tolerances.pkl, will be generated to contain the color tolerance intervals for each fuse type.
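A simplified version of this averaging-plus-tolerance step is sketched below; the data layout follows the previous sketch, the tolerance values are placeholders, and hue wrap-around is ignored for brevity.

```python
# HSV tolerance sketch in the spirit of masking_hsv.py; layout and tolerances are assumptions.
import pickle
import cv2
import numpy as np

def build_hsv_tolerances(reference_image_path: str, roi_path: str = "fuse_roi.pkl",
                         tol=(10, 40, 40), out_path: str = "hsv_tolerances.pkl") -> None:
    """Average HSV per fuse type over its ROIs, then expand the range by a user tolerance."""
    hsv = cv2.cvtColor(cv2.imread(reference_image_path), cv2.COLOR_BGR2HSV)
    with open(roi_path, "rb") as f:
        rois = pickle.load(f)
    tolerances = {}
    for fuse_type in sorted({r["type"] for r in rois}):
        patches = []
        for r in rois:
            if r["type"] != fuse_type:
                continue
            x, y, w, h = r["bbox"]
            patches.append(hsv[y:y + h, x:x + w].reshape(-1, 3))
        mean = np.vstack(patches).mean(axis=0)
        lower = np.maximum(mean - tol, 0).astype(np.uint8)
        upper = np.minimum(mean + tol, [179, 255, 255]).astype(np.uint8)
        tolerances[fuse_type] = (lower, upper)
    with open(out_path, "wb") as f:
        pickle.dump(tolerances, f)
```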

4.4.3. Pixel Tolerance Calibration

The validator_hsv.py script implements the specified HSV masks on each Region of Interest (ROI) to quantify the count of valid pixels that reside within the permissible HSV range. This process represents the final step in dataset preparation. The criteria for assessing the correct positioning of a fuse are defined by the user-defined minimum pixel percentage threshold applicable to each fuse type. The validator module is responsible for loading the two intermediate configuration files, fuse_roi.pkl and hsv_tolerances.pkl, during the validation procedure. Subsequently, each mask is applied individually to the corresponding region of interest (ROI). The threshold value for each type of fuse can be modified by the user through an interactive window that displays the count of valid pixels within the ROI. The specified threshold indicates the minimum percentage of acceptable pixels necessary for a fuse to be deemed valid within its assigned slot. If the percentage of matching pixels meets or exceeds this threshold, the fuse is deemed to be correctly positioned. The final file, fusebox.pkl, serves as the thorough configuration reference for automated validation and contains all of the data, including ROI locations, HSV tolerance ranges, and pixel validation thresholds, after calibration, Figure 10.
In the validator_hsv.py module, the pixel threshold (the minimum percentage of HSV-matching pixels required for a Region of Interest (ROI) to be considered valid) works similarly to a confidence score. The user can adjust this threshold according to the anticipated variation in fuse appearance. If the fuse components exhibit visible defects such as faded colors, uneven pigmentation, or irregular finishes, a lower threshold may be chosen to increase tolerance. Although this increases resistance to physical variation, it also raises the possibility of false positives, which could result in the incorrect validation of defective fuse boxes.
In high-quality manufacturing environments, characterized by visually consistent and defect-free fuses, higher thresholds may be used to enforce stricter validation requirements. By guaranteeing that only fuse placements with high visual confidence are accepted, this lowers the possibility of false acceptance. The effectiveness of threshold setting is directly related to the quality of the HSV mask established in the prior calibration step. By accurately capturing the color characteristics of each fuse type, a precisely calibrated mask can significantly reduce false positives, enabling a more permissive confidence threshold while preserving reliability. The threshold therefore serves as a protective measure against erroneous detections, with its sensitivity adjustable according to the quality of the components and the HSV configuration.
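A single-frame check along these lines is sketched below, reusing the ROI and tolerance structures assumed in the earlier sketches.

```python
# Single-frame ROI validation sketch in the spirit of validator_hsv.py (assumed data layout).
import cv2

def validate_frame(frame_bgr, rois, tolerances, min_pixel_pct) -> list:
    """Return one boolean per ROI: True if enough pixels fall inside the fuse's HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    results = []
    for roi in rois:
        x, y, w, h = roi["bbox"]
        lower, upper = tolerances[roi["type"]]
        mask = cv2.inRange(hsv[y:y + h, x:x + w], lower, upper)
        pct = 100.0 * cv2.countNonZero(mask) / (w * h)
        results.append(pct >= min_pixel_pct[roi["type"]])
    return results
```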

4.4.4. Practical Validation on Time Sequences

Building on the pixel-wise tolerance thresholds defined in the previous step, the following validation mechanism applies a time-sequenced approach to confirm the consistency of detection results under minor fluctuations or lighting inconsistencies.
The proof of concept covers both fuse box types, but only one is shown here since the steps are identical.
In practice, validation is not based on a single frame but on a time-based sequence. The main program loads the fusebox.pkl file after renaming it according to the model chosen in the ThingWorx interface. During operation, the system captures multiple frames over a specified time interval, such as 5 s, and computes how often each ROI meets its validation threshold within that interval; this is referred to as the validation frequency.
A fuse ROI is accepted if it meets the requirements in at least half of the frames. This temporal statistical validation reduces transient errors caused by changes in lighting or camera movement. Once this process is complete, any defective fuses are reported to the operator so they can be corrected if necessary.
A time-based validation system has been implemented to reduce the likelihood of false negatives, particularly those arising from transient factors such as inadequate lighting, reflections, or minor camera movements. The system aggregates detection results over a predetermined duration, typically 5 s, and determines the validity of a fuse only when the confidence score exceeds a specified threshold across multiple frames. This temporal filtering enhances robustness by minimizing short-term fluctuations in visual information. In well-designed production environments with stable lighting and cameras, the validation time can be reduced to one second. This enables a reduction in cycle times while maintaining dependable detection.
The fusebox.pkl file must be renamed to the model name saved in the mashup and then moved to the root directory; several such files can coexist, each describing a different fuse box. When the main program runs, it reads the model selected in the mashup and starts a function that analyzes the camera images for a time interval set by the user. This interval can be increased for data sets that may produce errors over very short, recurring periods (the rationale for this approach is further discussed in the following section). After processing the images for the preset period, a statistic is computed describing, for each fuse, the percentage of time during which it was valid: the number of valid and invalid frames within the 5-s window is counted and the percentage of validated moments is calculated. At this stage, the user defines a final threshold parameter, Figure 11, indicating the minimum percentage of validated frames required for each ROI to be considered correct; in practice, within 5 s an ROI must be counted valid in at least 50% of the frames.
Unless otherwise specified, we use a 5-s time-window to aggregate frame-level decisions (50% criterion) as a trade-off between reliability under transient noise and throughput; this parameter is user-settable in the ThingWorx interface and may be reduced (e.g., to 1 s) in stable illumination setups.
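The aggregation over the time window can be summarized as in the sketch below, which reuses the helper functions assumed in the earlier sketches; the defaults mirror the 5-s window and 50% criterion described above.

```python
# Time-windowed validation sketch; grab_frame/validate_frame are the assumed helpers above.
import time

def validate_window(rois, tolerances, min_pixel_pct,
                    window_s: float = 5.0, frame_ratio: float = 0.5) -> list:
    """Accept an ROI only if it passes in at least `frame_ratio` of the captured frames."""
    passes = [0] * len(rois)
    frames = 0
    t_end = time.time() + window_s
    while time.time() < t_end:
        frame = grab_frame()
        for i, ok in enumerate(validate_frame(frame, rois, tolerances, min_pixel_pct)):
            passes[i] += int(ok)
        frames += 1
    return [count / frames >= frame_ratio for count in passes]
```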
After this 5-s validation process, a separate window will display the possible fuses that are poorly mounted.
In Step 2, the user defines the accepted range for each fuse type (in HSV space): a color average is computed for all fuses of the same type, and the user specifies upper and lower tolerances by which the program extends that color range.
Step 3 involves calculating the color average for all ROIs of the same type, as well as determining a color average for each individual ROI, as shown in Figure 12, Figure 13 and Figure 14. The distance between the general average and the individual average of an ROI is then computed; if it falls within the threshold declared by the user, the ROI is considered valid (a first level of validity).
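A minimal expression of this distance check, under the same assumed data layout, is:

```python
# Colour-average check from Step 3; thresholds and layout are illustrative assumptions.
import numpy as np

def roi_average_ok(hsv_img, roi, type_mean, max_dist) -> bool:
    """Valid if the ROI's mean HSV lies within `max_dist` of its type's global mean."""
    x, y, w, h = roi["bbox"]
    roi_mean = hsv_img[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(roi_mean - type_mean)) <= max_dist
```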
In Step 4 (Figure 15), all these data are compiled into the main program under the name of the fuse box model they describe. There, the program applies a 50% threshold over the 5-s window: if an ROI is validated in at least 50% of all frames during the 5-s validation period, it is considered truly valid; otherwise, the problematic fuses are displayed in a separate window.
Figure 16 illustrates the output of the final validation step. The left pane displays type-specific HSV thresholds and validation results per region of interest (ROI), while the right pane highlights incorrectly validated fuses marked in red with corresponding labels (e.g., “AvgOK0”). This comparison allows for clear identification of faulty placements in real time based on a time-sequence confidence analysis.
At the end of each validation cycle, the program queries ThingWorx to check whether the fuse box model has changed and, if so, switches to the corresponding data set.
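As an illustration, such a query could be issued through ThingWorx's REST property endpoint as sketched below; the Thing and property names are hypothetical and would need to match the actual Mashup configuration.

```python
# Polling sketch; assumes the standard ThingWorx REST property endpoint.
# Thing name, property name, and host are hypothetical placeholders.
import requests

def get_selected_model(host: str, app_key: str,
                       thing: str = "FuseBoxWorkflow", prop: str = "SelectedModel") -> str:
    url = f"https://{host}/Thingworx/Things/{thing}/Properties/{prop}"
    r = requests.get(url, headers={"appKey": app_key, "Accept": "application/json"}, timeout=5)
    r.raise_for_status()
    return r.json()["rows"][0][prop]  # e.g. used to load the matching fusebox dataset (.pkl)
```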
This multi-step approach ensures robust and adaptive validation by combining color segmentation, pixel-level tolerancing, and temporal filtering—critical components for smart manufacturing under the Industry 5.0 paradigm.

5. Results

The smart workflow implemented for fuse box validation attained a 95% accuracy rate, thereby validating the system’s reliability in distinguishing between correct and incorrect fuse placements. In multiple test runs using two different fuse box models, the outcome remained consistent. Even with very little background noise or fluctuating lighting, the vision-based module’s use of HSV filtering and ROI-based detection proved successful in differentiating between fuse types and colors.
The fuse validation process under controlled illumination conditions is shown in Figure 17, which also shows improved color segmentation and improved detection accuracy. The confidence scores for each Region of Interest (ROI) are displayed in the system output on the left; red rectangles indicate detection errors, and green rectangles indicate fuses that were placed correctly. Through the application of HSV filtering and temporal averaging, the scores of “2.67%” and “5.95%” quantitatively represent the degree of alignment between the expected and actual fuse color profiles. A high-resolution camera view of the fuse’s precise location is displayed on the right for visual verification. This dual-view system effectively eliminates false positives caused by glare, shadows, and minor color variations, ensuring perfect synchronization between virtual analysis and real-world conditions. The example demonstrates the system’s robustness under optimal lighting conditions and highlights the significance of illumination quality for visual assessment.
Figure 18 demonstrates a validation failure caused by a missing or improperly placed fuse. The Region of Interest (ROI) displayed a red status and obtained a confidence score of AvgOK: 0, indicating that no HSV-matching pixels were identified within the 5-s validation period. The ThingWorx interface quickly displayed that the ROI was invalid. Errors occasionally arose from transparent or misaligned fuses, making the system’s reliable identification of these issues crucial for sustaining autonomous quality assurance. The validation method utilized time-sequenced image frames, enabling the system to statistically verify the presence of the fuse with a confidence threshold of 50% over a period of 5 s. This method alleviated transient visual disruptions, such as obstructions, illumination discrepancies, or slight camera shifts. The findings illustrate the efficacy of temporal filtering and dynamic thresholding in detecting erroneous insertions or absent components, thus guaranteeing substantial reliability in real-time fuse verification.
A series of validation experiments were conducted on two different fusebox types, each subjected to 100 independent trials under controlled conditions. The results are summarized in Table 1. The system correctly classified 190 of the 200 fuse boxes, corresponding to 95% accuracy. The layout and visual complexity of the fuse configuration affected detection robustness, with Type 1 fuse boxes performing somewhat better at 97% and Type 2 at 93%. The solution also reduced manual intervention time by 85% by automating most assembly verification processes. The transfer of faulty boxes to a correction zone for operator evaluation demonstrated Industry 5.0-compliant workflow segmentation and traceability.

6. Discussion

Compared to traditional manual inspection processes, the proposed automated workflow offers substantial improvements in both speed and accuracy. The use of Onshape for CAD-based modeling allowed precise virtual definition of fuse positions and types, which directly informed the ThingWorx-based decision logic and robotic actions. The integration between these platforms and the computer vision module ensured synchronized data flow and reliable decision-making.
The HSV segmentation method is susceptible to environmental conditions, like lighting intensity and reflections; hence, this study incorporates various mitigation measures. By dynamically calculating HSV thresholds based on the average color values of all designated ROIs and implementing adjustable tolerance limits, the system adapts to fluctuations in light diffusion and color consistency. The application of a 5-s time-windowed validation reduces the likelihood of false negatives caused by transitory oscillations. Regulated artificial illumination is advantageous for robotic assembly cells in production environments. Typically, enclosures or stationary lighting systems are employed to mitigate unpredictable visual noise. Given the outlined conditions, the proposed method is both scalable and reliable in industrial environments. Future advances may improve robustness via adaptive illumination control or machine learning-based segmentation to more efficiently handle transparent or highly reflective components.
Although HSV-based segmentation was chosen for its user-friendliness and reliability under controlled illumination, further research will evaluate it against RGB color space, depth-based analysis, and AI-based segmentation to determine overall performance and trade-offs. Alternative color spaces, such as LAB or YCbCr, provide enhanced resilience to variations in illumination; however, they require more intricate calibration processes, which may warrant further exploration.
Figure 16 and Figure 17 demonstrate that the computer vision system does not operate merely on static thresholds; rather, it assesses the presence of fuses dynamically over time. This temporal validation method improves dependability across varied illumination conditions and fuse types, including partially reflective or transparent variants.
The results indicate that the system can deliver real-time visual feedback, identifying errors as they arise and facilitating corrections through manual or automated feedback loops in subsequent iterations.
Although the performance was generally exceptional, several concerns were noted. Transparent fuses exhibited significant calibration challenges due to their low color saturation, which affected the precision of HSV filtering. Inconsistent ambient lighting may result in false negatives, even when tolerance levels are established. Future versions may address these limitations by integrating more adaptable color calibration algorithms and improved lighting solutions.
The system was evaluated on two different fuse box configurations, each undergoing 100 independent validation cycles. As shown in Table 1 and Figure 19, the overall accuracy across both types reached 95%, with Type 1 fuse boxes achieving a slightly higher success rate (97%) compared to Type 2 (93%). A total of 190 out of 200 fuse boxes were correctly classified, while 10 were misclassified (Figure 20) due to visual noise, inconsistent fuse appearances, or HSV mismatches.
We used the Wilson score method for binomial proportions to calculate 95% confidence intervals (CIs) in order to assess the statistical significance of the reported detection accuracies in Table 1. When sample sizes are moderate or the estimated proportions are near 0% or 100%, as in this study, this approach is more accurate than the conventional Wald approximation.
For a binary classification task (e.g., correct or incorrect fuse detection), the Wilson score interval for a success proportion $\hat{p} = x/n$ is defined as follows, where:
  • x is the number of successful outcomes,
  • n is the total number of trials,
  • z is the critical value from the standard normal distribution (for 95% confidence, z = 1.96)
  • CI is calculated as
$$CI = \frac{\hat{p} + \frac{z^2}{2n} \pm z\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}$$
This formula adjusts the interval by accounting for the uncertainty of small samples and keeps the bounds within the valid range of [0, 1].
The value z = 1.96 corresponds to the standard normal distribution, from which approximately 95% of values lie within ±1.96 standard deviations around the mean. Thus, the interval defines a range in which we can be 95% confident that the true detection accuracy lies. The results of calculations are summarized in Table 1.
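For reproducibility, the interval can be computed directly; the short sketch below implements the formula for a generic count of x successes out of n trials (for instance, the overall 190 correct classifications out of 200).

```python
# Wilson score interval for a binomial proportion (95% CI with z = 1.96 by default).
from math import sqrt

def wilson_ci(x: int, n: int, z: float = 1.96) -> tuple:
    p_hat = x / n
    centre = p_hat + z ** 2 / (2 * n)
    margin = z * sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    denom = 1 + z ** 2 / n
    return (centre - margin) / denom, (centre + margin) / denom

print(wilson_ci(190, 200))  # CI for the overall observed accuracy of 95%
```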
The difference in performance between the two types can be attributed to structural and chromatic complexity. Type 2 fuse boxes include a higher number of similar-colored fuses placed in denser layouts, increasing the likelihood of misclassification under suboptimal lighting. These results confirm the robustness of the proposed method while also highlighting its sensitivity to color variation and physical fuse alignment—factors that may be mitigated through improved calibration or by introducing depth-aware vision in future iterations.
The system’s modularity and scalability were validated by testing two different fuse box models, and results suggest the workflow can be extended to other types of assemblies with minimal adaptation. The ThingWorx interface, designed for intuitive control, allowed non-expert users to initiate and supervise operations efficiently, demonstrating the approach’s usability and industrial readiness.
The chosen 5-s validation interval was derived from empirical observations during experimentation and provides an effective equilibrium between latency and detection reliability. Nonetheless, given that this parameter is adjustable, subsequent research will encompass a sensitivity analysis to ascertain the best length across diverse environmental and system variables.
This study serves as a proof-of-concept implementation aimed at demonstrating the viability of an HSV-based validation system incorporated inside a digital twin architecture. The 50% confidence criterion utilized in the 5-s time-based validation window was selected heuristically, based on empirical data, to balance robustness and responsiveness. The system demonstrated robust performance under test settings; however, employing a more objective method, such as Receiver Operating Characteristic (ROC) curve analysis, would provide a statistically rigorous means to ascertain the ideal threshold. This study is the first step in a longer research project that will analyze these approaches in subsequent studies to enhance performance and adaptability in a range of industrial settings.
Preliminary investigations indicated that the average system reaction time from CAD input to robot execution trigger was under 500 milliseconds. To guarantee real-time reliability and pinpoint potential improvement opportunities, future initiatives will encompass comprehensive latency assessments for each communication link (CAD–ThingWorx–Robot).
Two distinct types of fuse boxes were employed to validate the system; additional testing with a broader array of geometries, densities, and fuse colors is anticipated in future endeavors to evaluate scalability and generalizability across various configurations.
Subsequent research will investigate applications such as PCB assembly and multi-component sensor modules to further substantiate the system’s generalizability.

7. Conclusions

This study proposed a smart manufacturing workflow for fuse box assembly and validation by integrating CAD, IoT, robotics, and computer vision. The following key outcomes summarize the main contributions and experimental results of the proposed system.
  • The workflow can validate wiring boxes with 95% accuracy as obtained from testing.
  • High-transparency fuses will lead to a more complicated calibration process.
  • Real-time synchronization between the CAD model and physical execution via the digital twin concept enabled adaptive robot task selection based on the chosen wiring box configuration. Unless otherwise stated, experiments use the automatic, CAD-derived ROIs; the manual ROIs are kept as a fallback/ablation and produce an equivalent layout.
  • An evenly lit environment will lead to better color recognition and fewer false negative results.
  • Applying the manual-intervention-time metric yields an approximately 85% decrease in operator-active time, consistent with the shift from per-slot manual checks to a single per-cycle action plus rare exceptions.
  • The system is modular, so it can easily adapt to different types of wiring boxes with little setup needed, thanks to the connection between CAD and IoT and smart vision checks using ROI.
The impact on the automated manufacturing industry:
  • Demonstrated Feasibility of a Closed-Loop System:
    The integration of CAD modeling, IoT platforms, robotics, and computer vision into a single closed-loop workflow highlights the feasibility of building self-adaptive manufacturing systems where virtual configurations directly influence physical execution and validation.
  • Time-Based Validation Enhances Robustness:
    The use of time-sequenced frame validation proved effective in increasing detection stability and reducing sensitivity to transient noise, making the system more suitable for real-world deployment.
  • Ease of Deployment for New Models:
    Because of ROI definition tools and standard HSV mask generation scripts, the system can be set up quickly for new fuse box models without needing a lot of reprogramming or retraining.
  • Low-Code Interface Enables Operator Accessibility:
    The ThingWorx user interface allowed for intuitive control of complex backend processes, demonstrating that advanced Industry 5.0 workflows can be accessible to non-expert operators through well-designed user experiences.
  • Potential for Real-Time Feedback Loops:
    Architecture sets the foundation for future implementation of real-time feedback to the robotic system, allowing dynamic correction of faulty insertions without requiring human intervention.
Future research will investigate how to combine depth-aware cameras or 3D sensing with AI-enhanced segmentation (such as convolutional neural networks trained on fuse datasets) to enhance segmentation in varying lighting conditions and with semi-transparent or reflective fuse materials. These methods can increase robustness and generalizability in industrial settings by enabling pixel-level semantic classification and greater invariance to ambient changes.

Author Contributions

Conceptualization, C.-C.C., T.C.N. and M.H.; methodology, C.-C.C. and D.-A.C.; software, T.C.N. and M.H.; validation, C.-C.C., T.C.N., M.H. and D.-A.C.; formal analysis, C.-C.C.; investigation, C.-C.C. and D.-A.C.; resources, C.-C.C., T.C.N. and M.H.; writing—original draft preparation, C.-C.C. and D.-A.C.; writing—review and editing, C.E.C.; supervision, C.E.C.; project administration, C.-C.C.; funding acquisition, C.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from the National Program for Research of the National Association of Technical Universities—GNAC ARUT 2023.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Dragos-Alexandru Cazacu was employed by the company Education Team, PTC Eastern Europe SRL. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CAD   Computer-Aided Design
(I)IoT   (Industrial) Internet of Things
CPS   Cyber–Physical Systems
API   Application Programming Interface
ROI   Regions of Interest
HSV   Hue, Saturation, Value
RGB   Red, Green, Blue
CI   Confidence Intervals
ROC   Receiver Operating Characteristic
SaaS   Software as a Service
AI   Artificial Intelligence
MIT   Manual Intervention Time

References

  1. Cazacu, C.-C.; Iorga, I.; Parpală, R.C.; Popa, C.L.; Coteț, C.E. Optimizing Assembly in Wiring Boxes Using API Technology for Digital Twin. Appl. Sci. 2024, 14, 9483. [Google Scholar] [CrossRef]
  2. Lee, J.; Bagheri, B.; Kao, H.A. A Cyber-Physical Systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 2015, 3, 18–23. [Google Scholar] [CrossRef]
  3. Boyes, H.; Hallaq, B.; Cunningham, J.; Watson, T. The Industrial Internet of Things (IIoT): An analysis framework. Comput. Ind. 2018, 101, 1–12. [Google Scholar] [CrossRef]
  4. Gilchrist, A. Industry 4.0: The Industrial Internet of Things; Apress: Berkeley, CA, USA, 2016; Available online: https://link.springer.com/book/10.1007/978-1-4842-2047-4 (accessed on 12 February 2025).
  5. Grieves, M.; Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In Transdisciplinary Perspectives on Complex Systems; Springer: Cham, Switzerland, 2017; pp. 85–113. [Google Scholar] [CrossRef]
  6. Tao, F.; Zhang, M.; Liu, Y.; Nee, A.Y.C. Digital Twin in Industry: State-of-the-Art. IEEE Trans. Ind. Inform. 2019, 15, 2405–2415. [Google Scholar] [CrossRef]
  7. Qi, Q.; Tao, F.; Hu, T.; Anwer, N.; Liu, A.; Wei, Y.; Wang, L. Enabling technologies and tools for digital twin. J. Manuf. Syst. 2021, 58, 3–21. [Google Scholar] [CrossRef]
  8. Onshape Inc. Cloud-CAD Platform Documentation. Available online: https://www.onshape.com/en/resource-center/tech-tips (accessed on 12 December 2024).
  9. Bécue, A.; Maia, E.; Feeken, L.; Borchers, P.; Praça, I. A New Concept of Digital Twin Supporting Optimization and Resilience of Factories of the Future. Appl. Sci. 2020, 10, 4482. [Google Scholar] [CrossRef]
  10. Huang, Z.; Shen, Y.; Li, J.; Fey, M.; Brecher, C. A Survey on AI-Driven Digital Twins in Industry 4.0: Smart Manufacturing and Advanced Robotics. Sensors 2021, 21, 6340. [Google Scholar] [CrossRef]
  11. Ryalat, M.; ElMoaqet, H.; AlFaouri, M. Design of a Smart Factory Based on Cyber-Physical Systems and Internet of Things towards Industry 4.0. Appl. Sci. 2023, 13, 2156. [Google Scholar] [CrossRef]
  12. Bhandari, B.; Manandhar, P. Integrating Computer Vision and CAD for Precise Dimension Extraction and 3D Solid Model Regeneration for Enhanced Quality Assurance. Machines 2023, 11, 1083. [Google Scholar] [CrossRef]
  13. Rahman, M.A.; Shakur, M.S.; Ahamed, M.S.; Hasan, S.; Rashid, A.A.; Islam, M.A.; Haque, M.S.S.; Ahmed, A. A Cloud-Based Cyber-Physical System with Industry 4.0: Remote and Digitized Additive Manufacturing. Automation 2022, 3, 400–425. [Google Scholar] [CrossRef]
  14. Abbas, M.S.; Hussain, R.; Zaidi, S.F.A.; Lee, D.; Park, C. Computer Vision-Based Safety Monitoring of Mobile Scaffolding Integrating Depth Sensors. Buildings 2025, 15, 2147. [Google Scholar] [CrossRef]
  15. Furferi, R.; Servi, M. A Machine Vision-Based Algorithm for Color Classification of Recycled Wool Fabrics. Appl. Sci. 2023, 13, 2464. [Google Scholar] [CrossRef]
  16. Yang, S.; Pan, W.; Li, M.; Yin, M.; Ren, H.; Chang, Y.; Liu, Y.; Zhang, S.; Lou, F. Industrial Internet of Things Intrusion Detection System Based on Graph Neural Network. Symmetry 2025, 17, 997. [Google Scholar] [CrossRef]
  17. Liu, Z.; Davoli, F.; Borsatti, D. Industrial Internet of Things (IIoT): Trends and Technologies. Future Internet 2025, 17, 213. [Google Scholar] [CrossRef]
  18. Dhanda, M.; Rogers, B.; Hall, S.; Dekoninck, E.; Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of industry 5.0. Robot. Comput.-Integr. Manuf. 2025, 93, 102937. [Google Scholar] [CrossRef]
  19. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  20. Yang, H.; Kumara, S.; Bukkapatnam, S.T.S.; Tsung, F. The Internet of Things for Smart Manufacturing—A Review. IIE Trans. 2019, 51, 1190–1216. Available online: https://www.researchgate.net/publication/330408457_The_Internet_of_Things_for_Smart_Manufacturing_A_Review (accessed on 12 February 2025). [CrossRef]
  21. Tao, F.; Qi, Q.; Liu, A.; Kusiak, A. Data-driven Smart Manufacturing. J. Manuf. Syst. 2018, 48, 157–169. Available online: https://www.researchgate.net/publication/322566556_Data-driven_smart_manufacturing (accessed on 9 March 2025). [CrossRef]
  22. Vilanova, J.R.; Luque, R.; Vega, P.; Ferrera, E. Human-Robot Collaboration in the Industry 5.0 Era: The Aerospace Perspective. European Robotics Forum. 2024, 33, 174. Available online: https://www.researchgate.net/publication/387578287_Human-Robot_Collaboration_in_the_Industry_50_Era_The_Aerospace_Perspective (accessed on 26 March 2025).
  23. Sun, D.; Li, W.; Shen, Y.; Chen, H.; Tan, Z.; Ding, X. Application practice of intelligent algorithm and digital twin in electromechanical equipment. Sci. Bull. Univ. Politeh. Buchar. Ser. D 2024, 86, 235–256. Available online: https://www.scientificbulletin.upb.ro/rev_docs_arhiva/full82f_322545.pdf (accessed on 16 March 2025).
  24. Calianu, G.; Carcadea, E.; Lupu, C.; Petrescu-Niță, A. Creating an adaptive data collecting system based on IoT devices. Sci. Bull. Univ. Politeh. Buchar. Ser. C 2025, 87, 61–72. Available online: https://www.scientificbulletin.upb.ro/rev_docs_arhiva/full465_934718.pdf (accessed on 16 March 2025).
  25. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering 2017, 3, 616–630. Available online: https://www.sciencedirect.com/science/article/pii/S2095809917307130 (accessed on 18 March 2025). [CrossRef]
Figure 1. System Architecture.
Figure 2. Smart Manufacturing Workflow: a Combined IoT, CAD, and Computer Vision Approach.
Figure 3. Presents CAD models of fuse boxes that include defined fuse positions and connectors for smart assembly integration: Part (a) illustrates a multi-compartment fuse box featuring a high-density layout. The model’s numerous fuse slots, each of which is marked with a circular indicator, represent regions of interest (ROIs) used in computer vision validation. Additionally, complex electrical connectors are present, enabling accurate data extraction and seamless integration with the Internet of Things platform (ThingWorx). These connectors are crucial for mapping the virtual model to the actual assembly and guaranteeing robotic precision. (b) displays a second fuse box model that is more compact and likely intended for a different application or space constraint. It features numerous rows of clearly marked connector ports and fuses. The fuses are color-coded, and key locations are marked for verification. Similar to the first model, this one is intended to be imported into the system architecture in order to direct vision-based inspection and robotic assembly.
Figure 4. Physical Fuse Boxes Mounted in 3D-Printed Fixtures with AprilTag Markers for Vision-Based Validation. The two fuse box types illustrate the versatility of the proposed workflow: (a) the Type 1 fuse box, a high-density design with several compartments, and (b) the Type 2 fuse box, a compact design suited to space-constrained installations.
Figure 5. ThingWorx Mashup Selection.
Figure 6. Workflow of the computer vision module.
Figure 7. ROI Definition program.
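As context for Figure 7, the following is a minimal sketch of how a one-time manual ROI definition step can be scripted with OpenCV; the camera index, window name, and output file are illustrative assumptions rather than the exact program used in the study.

    # One-time manual ROI setup (illustrative sketch): the operator draws one box
    # per fuse slot on a captured frame, and the boxes are saved for later runs.
    import json
    import cv2

    cap = cv2.VideoCapture(0)            # camera index is an assumption
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not capture a camera frame")

    # selectROIs lets the user draw several rectangles; ENTER/SPACE confirms each
    # box and ESC finishes the selection.
    rois = cv2.selectROIs("Define fuse ROIs", frame, showCrosshair=True)
    cv2.destroyAllWindows()

    # Persist the (x, y, w, h) boxes so the validation step can reuse them.
    with open("fuse_rois.json", "w") as f:
        json.dump([list(map(int, r)) for r in rois], f, indent=2)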
Figure 8. API extraction (Onshape mates/BaseMate) and overlay/projection: screenshot showing how slot footprints are overlaid and saved for each fuse.
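The CAD-driven extraction illustrated in Figure 8 can be outlined with the Onshape REST API; the sketch below is an assumption-laden illustration only: the document/workspace/element IDs are placeholders, and the exact JSON field names for mate data may differ from the deployment described here.

    # Hypothetical sketch of reading mate/BaseMate data from an Onshape assembly
    # so that fuse-slot positions can later be projected into camera ROIs.
    # Endpoint parameters and response field names are assumptions.
    import requests

    BASE = "https://cad.onshape.com/api"
    DID, WID, EID = "<document_id>", "<workspace_id>", "<element_id>"  # placeholders
    AUTH = ("<access_key>", "<secret_key>")   # Onshape API keys via HTTP basic auth

    resp = requests.get(
        f"{BASE}/assemblies/d/{DID}/w/{WID}/e/{EID}",
        params={"includeMateFeatures": "true", "includeMateConnectors": "true"},
        auth=AUTH,
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    assembly = resp.json()

    # Collect the origin of each mate coordinate system (field names assumed).
    slot_origins = []
    for feature in assembly.get("rootAssembly", {}).get("features", []):
        for entity in feature.get("featureData", {}).get("matedEntities", []):
            origin = entity.get("matedCS", {}).get("origin")
            if origin is not None:
                slot_origins.append(origin)   # [x, y, z] in the model's units

    print(f"Extracted {len(slot_origins)} candidate slot origins")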
Figure 9. HSV Mask Creation.
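A minimal sketch of the HSV masking step behind Figure 9 is given below; the threshold values are illustrative, not the calibrated ranges from the study.

    # HSV mask creation (illustrative thresholds): convert the frame to HSV and
    # keep only the pixels falling inside a per-fuse-type color range.
    import cv2
    import numpy as np

    def fuse_mask(frame_bgr, lower_hsv, upper_hsv):
        """Return a binary mask of pixels inside the given HSV range."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

    # Example call for a bluish fuse; the exact range must come from calibration.
    frame = cv2.imread("fusebox.jpg")
    mask = fuse_mask(frame, (100, 80, 80), (130, 255, 255))
    pixel_count = int(cv2.countNonZero(mask))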
Figure 10. Tolerance program.
Figure 11. ROI detection with optimal HSV thresholds under controlled lighting conditions. The contours correctly highlight the target fuse areas.
Figure 12. HSV Mask Calibration for Green Fuses (Type 8) Using ROI Segmentation and Interactive Threshold Adjustment. The calibration of pixel tolerance is demonstrated. The yellow overlay indicates areas where color variation is within accepted bounds, confirming robust ROI extraction.
Figure 13. HSV Range Tuning for Blue Fuses (Type 6) with Real-Time Visual Feedback on Mask Accuracy. Frame consistency in the highlighted area demonstrates effective masking and threshold reliability.
Figure 14. ROI validation result after applying pixel tolerance filtering. All defined ROIs pass the pixel count and position criteria, confirming correct fuse insertion.
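The pixel-tolerance criterion illustrated in Figures 10, 12 and 14 can be sketched as a per-ROI pixel-count check; the ROI format, expected counts, and tolerance value below are assumptions for illustration.

    # Per-ROI pixel-count check with tolerance (illustrative): an ROI passes when
    # the number of in-range pixels is close enough to its calibrated reference.
    import cv2

    def roi_passes(mask, roi, expected_pixels, tolerance=0.2):
        """roi is an (x, y, w, h) box; tolerance is the accepted relative deviation."""
        x, y, w, h = roi
        found = int(cv2.countNonZero(mask[y:y + h, x:x + w]))
        return abs(found - expected_pixels) <= tolerance * expected_pixels

    # A fuse box is accepted only if every ROI passes its own check, e.g.:
    # box_ok = all(roi_passes(mask, r, exp) for r, exp in zip(rois, expected_counts))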
Figure 15. Time-Based Validation of Fuse Presence Using ROI Confidence Thresholds and a Real-Time Camera Feed. The control window for visual validation monitoring shows in real time which fuses meet the HSV and position criteria, using green/red visual flags.
Figure 16. Final Fuse Validation Output. Correct (green) and faulty (red) regions of interest (ROIs) are displayed using type-based color thresholds. The output was generated after a 5-s stability assessment; all conditions are met for successful fuse box validation, confirming the robustness of the time-based method.
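The 5-s stability assessment mentioned in Figures 15 and 16 can be expressed as a simple time-window check; the polling rate and the check_all_rois callback below are placeholders, not the authors' exact implementation.

    # Time-window validation (illustrative): a box is declared valid only after all
    # ROIs have passed continuously for the whole window, which suppresses flicker
    # caused by transient lighting changes.
    import time

    def validate_with_time_window(check_all_rois, window_s=5.0, poll_s=0.1):
        """Return True once check_all_rois() has been True for window_s seconds."""
        window_start = None
        while True:
            if check_all_rois():                     # True when every ROI passes now
                if window_start is None:
                    window_start = time.monotonic()
                elif time.monotonic() - window_start >= window_s:
                    return True                      # stable for the whole window
            else:
                window_start = None                  # any failure resets the window
            time.sleep(poll_s)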
Figure 17. Visual Validation Output in Optimal Lighting Conditions.
Figure 18. Detection of Incorrectly Placed or Missing Fuses.
Figure 19. Detection accuracy with 95% Wilson confidence intervals for each fuse-box type and overall (n = 200).
Figure 20. Validation outcomes per fuse box type: correct vs. incorrect counts.
Table 1. Validation Results for Two Fusebox Types Based on Detection Accuracy.

Description                      Type 1 Fusebox    Type 2 Fusebox    Total
Number of tests                  100               100               200
Success rate                     97%               93%               95%
Correctly identified boxes       97                93                190
Incorrectly identified boxes     3                 7                 10
Confidence Interval (Wilson)     [91.6%, 99.0%]    [86.3%, 96.6%]    [91.0%, 97.3%]
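For reference, the Wilson interval reported in Table 1 and Figure 19 is computed, for k correctly validated boxes out of n tests with z ≈ 1.96 at the 95% level, as

    \hat{p} = \frac{k}{n}, \qquad
    \mathrm{CI}_{\mathrm{Wilson}} =
    \frac{\hat{p} + \frac{z^{2}}{2n} \;\pm\; z\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^{2}}{4n^{2}}}}{1 + \frac{z^{2}}{n}}

With k = 190 and n = 200 this evaluates to approximately [91.0%, 97.3%], in agreement with the overall row of the table.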