2. Related Work
Real-time production system monitoring, control, and optimization are made possible by the Internet of Things (IoT) paradigm [2,3], which also ensures connectivity and interoperability among various industrial systems to provide scalable automation [4]. Creating responsive and robust manufacturing environments requires dynamic modeling, performance prediction, and operational synchronization, all of which are made possible by a digital twin, which is a real-time digital version of a physical entity updated via sensor and IoT data streams [5,6]. Discrete manufacturing benefits greatly from its connection with IIoT platforms, which enable fast feedback mechanisms and closed-loop control [7]. Moreover, the use of cloud-based CAD programs like Onshape improves the capabilities of digital twins by making model-based representations available at all organizational levels [8].
Cyber–physical systems (CPS), digital twins, and machine vision are all integrated within Industry 4.0 frameworks, as demonstrated by recent developments in smart manufacturing systems. A comprehensive digital twin architecture was proposed by Bécue et al. [9] with the aim of enhancing optimization and operational resilience in future factories, while Huang et al. [10] offered a detailed analysis of AI-driven digital twins, highlighting their significance in intelligent robotics and smart industrial applications. CPS remains a key component, as shown by Ryalat et al. [11], who offered a smart factory model integrating CPS and IoT principles for effective Industry 4.0 deployment. To support this, Cazacu et al. [1] demonstrated the effectiveness of API-based digital twins in expediting wiring box assembly processes, emphasizing the importance of a smooth integration between CAD systems and execution environments.
To enhance quality assurance, Bhandari and Manandhar [12] explored CAD–computer vision integration for precise 3D model reconstruction. Rahman et al. [13] presented a cloud-based CPS for remote additive manufacturing, indicating the growing feasibility of distributed manufacturing systems. Abbas et al. [14] developed a safety monitoring system using computer vision and depth sensors, extending the application scope beyond assembly quality control.
Furferi and Servi [15] demonstrated the use of machine vision for precise color classification, relevant to complex textures such as wool textiles. Yang et al. [16] employed a graph neural network for intrusion detection in the Industrial Internet of Things (IIoT) to increase cyber-resilience in manufacturing networks. Liu et al. [17] reviewed IIoT trends and implementation technologies, while Dhanda et al. [18] examined the challenges of human–robot collaboration in the context of Industry 5.0. Matheson et al. [19] offered more details on human–robot interaction in the manufacturing sector.
The Internet of Things, smart automation, and digital twin technologies are used to create intelligent manufacturing systems. Yang et al.'s comprehensive analysis of IoT in smart manufacturing [20] emphasizes the need for data security, scalability, and interoperability, in addition to real-time sensing and decision-making, for the development of smart factory infrastructure. Tao et al. [21] show that IoT-based, data-driven architectures enable real-time feedback and predictive analytics, which can greatly improve the fuse validation procedures of our proposed system.
Vilanova et al. [22] highlight the human-centric paradigm in smart manufacturing, underscore the transition to collaborative intelligence in Industry 5.0, and advocate for human–robot interaction, wherein adaptable robotic decision-making enhances human supervision. In electromechanical environments, Sun et al. [23] illustrate that the integration of intelligent algorithms with digital twin frameworks facilitates real-time simulation at the system architecture level, enhancing fault detection and configuration control—elements that are especially vital for high-precision operations such as fuse box assembly.
Calianu et al. [24] underline the importance of modularity and dynamic reconfiguration in the development of an adaptive data gathering system utilizing IoT nodes for industrial sensing platforms. This methodology aligns with our modular architecture, which consolidates data collected from CAD and visual systems for synchronized processing through an Internet of Things hub. In an in-depth study of cloud manufacturing and Industry 4.0 approaches, Zhong et al. [25] accentuated the need for cloud-based control systems and decentralized data processing, particularly in production environments with dynamic configurations. This demonstrates the importance of incorporating the Onshape and ThingWorx platforms into our architecture to facilitate real-time synchronization between digital and physical models.
In summary, recent literature shows that combining cloud-based CAD (digital twins), IIoT connectivity, and computer vision creates powerful smart workflows for assembly. Each component—IoT, CAD models, robots, and computer vision—feeds into a unified Industry 5.0 architecture that supports real-time orchestration and traceability. Prior studies of wiring-box assembly (e.g., using Onshape CAD and API-based digital twins) demonstrated the viability of this approach [1]. Building on that, the current trend is to close the loop: every action of the robot is monitored by computer vision, synced through the IIoT platform, and reflected in the virtual model. Such integration not only automates validation but also enables predictive and adaptive control, fulfilling the promise of intelligent, flexible manufacturing.
3. Methodology
This article builds upon the previous work by Cazacu et al. [1], which detailed the fuse assembly process in wiring boxes. In the present research, the aim is to provide a proof-of-concept for the entire smart manufacturing workflow, starting with the selection of the wiring box type, followed by the automated transmission of assembly instructions to the robot, and concluding with the validation and quality inspection of the final assembly.
The previous method [1] allowed automated fuse insertion using static geometric data with minimal runtime validation or feedback by establishing a unidirectional connection between the collaborative robot and the CAD model (Onshape). The methodology in the present study is greatly enhanced by the ThingWorx IoT platform, which permits real-time monitoring, synchronization, and bidirectional communication between virtual and physical systems. The digital twin is now responsible for pre-assembly configuration, execution, and post-assembly validation. The solution enables the operator to choose the type of fuse box via an Internet of Things dashboard. Upon independently acquiring the relevant CAD data, the robot adapts its insertion technique. During operation, the vision module detects anomalies such as robot misalignment, sensor drift, and improperly oriented or labeled fuses, and ThingWorx relays this feedback. Upon identifying inconsistencies, the digital twin updates its internal state and initiates corrective measures, including halting the procedure or altering the robot’s trajectory.
Following the assembly, a validation process employing computer vision is commenced. The camera evaluates each fuse using color segmentation (HSV), region of interest (ROI) matching, and threshold logic. The outcomes are compared with the anticipated CAD configuration, and the assembly is categorized as successful or unsuccessful. Once this decision is delivered through the IoT dashboard, the detection, execution, and validation cycle is complete. This closed-loop architecture aligns with the fundamental principles of Industry 4.0 and 5.0, evolving the system from a static automation script into a fully responsive, self-monitoring intelligent assembly cell.
This research outlines the development of a unique interface on the ThingWorx platform that enables users to select the suitable type of wiring box for assembly. The robot can independently identify the chosen wiring box and execute the corresponding assembly program by merging the ThingWorx environment with a collaborative robotic arm and a cloud-based CAD system (Onshape). The system initiates an automatic quality inspection phase based on a computer vision module following the installation of the fuses.
Each fuse box design was subjected to 100 independent trials, resulting in 200 test samples to evaluate the system’s performance. To emulate regulated manufacturing conditions, all experiments were conducted with fixed overhead LED panels and uniform artificial illumination. To reduce background noise and reflections, the robot worked in a small area. Binary classification was used to evaluate the visual validation’s accuracy in identifying the type of fuse and where it was placed for each ROI. When a detection’s color and position matched the CAD requirements, it was deemed accurate. Based on the total number of accurate classifications across all test instances, a final accuracy of 95% was achieved. We calculated the 95% confidence interval using the Wilson score method to assess statistical robustness, thereby ensuring the reliability of the findings presented.
Manual-intervention time (MIT) is defined as operator-active time per cycle (recipe selection, visual checks, confirmations), excluding fixture load/unload and machine-only operations. MIT values were derived from ThingWorx timestamps and verified by stopwatch. We summarize each condition by the median across repeated cycles and compute the relative reduction as:

\[ \mathrm{MIT\ reduction\ (\%)} = \frac{\widetilde{\mathrm{MIT}}_{\mathrm{manual}} - \widetilde{\mathrm{MIT}}_{\mathrm{automated}}}{\widetilde{\mathrm{MIT}}_{\mathrm{manual}}} \times 100, \]

where \(\widetilde{\mathrm{MIT}}\) denotes the median MIT of the corresponding condition.
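For illustration only, a minimal Python sketch of this computation follows; the per-cycle MIT samples are hypothetical values, not measured data:

import statistics

# Hypothetical operator-active times per cycle (seconds); in practice derived from ThingWorx timestamps.
manual_mit_s = [41.0, 39.5, 44.2, 40.8, 42.1]     # baseline manual workflow
automated_mit_s = [6.3, 5.9, 6.8, 6.1, 6.4]       # proposed automated workflow

mit_manual = statistics.median(manual_mit_s)
mit_automated = statistics.median(automated_mit_s)

# Relative reduction of the median MIT, as defined above.
reduction = (mit_manual - mit_automated) / mit_manual * 100.0
print(f"MIT reduction: {reduction:.1f}%")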
This intelligent production line is adaptable for various discrete assembly tasks using modular components and configurable logic; however, validation to date has primarily concentrated on fuse box scenarios.
4. Proof of Concept
4.1. System Architecture Overview
The system architecture depicted in Figure 1 incorporates four crucial components: (1) ThingWorx (9.3.16), an Internet of Things platform that allows users to select the type of fuse box and start its assembly by synchronizing data between the virtual model and physical execution; (2) a collaborative robot that receives instructions from ThingWorx and automatically inserts fuses according to the CAD configuration; (3) a camera module that captures images of the assembled fuse box and uses HSV-based segmentation to perform automated visual validation; and (4) Onshape, a cloud-based CAD platform (cloud SaaS with no user-visible versioning due to continuous deployment) that models fuse boxes with accurate fuse types and locations. This technology provides full automation of assembly, enables real-time synchronization between the CAD model and robotic execution via a digital twin framework, and improves detection reliability through statistical validation at 5-s intervals. Its modular and extensible architecture enables straightforward adaptation to many assembly tasks beyond fuse boxes. The system design of this study demonstrates the connection between the computer vision module, an IoT orchestration layer (ThingWorx 9.3.16), a collaborative robot (cobot), and the Onshape CAD platform. The architecture enables closed-loop validation and real-time data synchronization for fuse box assembly operations. To verify precise fuse placement, the computer vision workflow relies on image capture and processing. Several key Python libraries were used for implementation: matplotlib (v3.10.1) for development debugging and visualization; pupil_apriltags (v1.0.4.post11) for precise localization using fiducial markers; NumPy (v2.2.5) for numerical computations; and OpenCV (v4.11.0.86) for image processing and HSV masking. All algorithms were created and implemented using Python 3.12.2 to guarantee module compatibility and enable efficient automation within the ThingWorx-integrated environment.
The comprehensive smart manufacturing workflow proposed in this study is illustrated in Figure 2, which shows the automation of the fuse assembly and validation process through the integration of the CAD environment, IoT platform, digital twin, collaborative robot, and computer vision module. The process begins with the selection of the wiring box model and concludes with a decision node that assesses the need for rework based on visual examination. This closed-loop architecture facilitates adaptive control and traceability throughout the manufacturing cycle, ensuring real-time synchronization.
4.2. Onshape CAD Models
Onshape is utilized:
As the primary CAD platform for creating wiring box models, extracting fuse positions and types, and defining the geometry used in robotic execution and validation.
To design wiring boxes quickly and easily; each box contains data about fuse positions and types (Figure 3).
To design a collaborative robot to create the Digital Twin of the physical robot; Cazacu et al. [1] presented this step in a previous study.
To connect the computer vision algorithm to Onshape and automate the recognition process.
Both models were created in Onshape (CAD) and are compatible with the smart system described in the workflow. They include all necessary geometric and semantic data—such as fuse types, positions, and connector interfaces—which are crucial for:
Automated recognition of fuse layout
Programmatic selection of assembly tasks via ThingWorx
Robot execution of fuse insertion
Real-time validation using the camera and computer vision module
Figure 4 depicts the two fuse boxes employed in the study. Both are mounted on custom-designed blue 3D-printed supports aimed at facilitating vision-based localization and ensuring mechanical stability. The AprilTag markings located at the corners of these fixtures allow the computer vision module to precisely determine the position and orientation of each box. This ensures that during automated validation, the specified Regions of Interest (ROIs) are precisely aligned. To facilitate accurate comparison and closed-loop verification in the smart manufacturing process, the colored fuses in the boxes are methodically organized in configurations that align with their assigned virtual CAD models.
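As an illustrative sketch only (not the authors' implementation), the fixture pose can be recovered from the corner AprilTags with pupil_apriltags and OpenCV; the tag family, tag IDs, and reference corner coordinates below are assumptions:

import cv2
import numpy as np
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")             # assumed tag family

frame = cv2.imread("fusebox.jpg")                     # captured camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = detector.detect(gray)

# Tag centers indexed by ID; IDs 0-3 are assumed to sit at the fixture corners.
centers = {d.tag_id: d.center for d in detections}
if all(i in centers for i in (0, 1, 2, 3)):
    src = np.float32([centers[0], centers[1], centers[2], centers[3]])
    # Corner positions in the calibration image in which the ROIs were defined (assumed values).
    dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])
    H = cv2.getPerspectiveTransform(src, dst)
    rectified = cv2.warpPerspective(frame, H, (800, 600))   # CAD-defined ROIs align with this view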
The Onshape CAD model delineates the fuse configuration for each physical box, subsequently relayed to the ThingWorx platform, which interfaces with the robot and the vision system. During validation, the system uses camera input and HSV-based filtering to detect whether the correct fuse type and position match the digital model. The physical setup shown here is therefore crucial for enabling robust digital twin synchronization, automated quality control, and real-time feedback in the overall workflow.
The digital twin implementation for the robotic assembly process has been thoroughly detailed in a previous study entitled “Optimizing Assembly in Wiring Boxes Using API Technology for Digital Twin” [1]. We encourage readers interested in the technical aspects of the robot’s motion planning, task execution, and digital synchronization to consult that publication. In the present work, we focus specifically on the configuration of the fuse box model, the integration of CAD and IoT platforms, and the vision-based validation of the assembly process. The above-mentioned article already comprehensively addresses the actual robotic execution of fuse insertion, which is beyond the scope of this paper.
This integration of digital modeling, robotic assembly, and vision-based validation represents a key component of the Industry 5.0 approach implemented in the study.
4.3. ThingWorx Mashup Selection
We created a custom ThingWorx Mashup interface to make it easier for users to interact with the smart fuse box assembly and validation system, as seen in Figure 5. This interface serves as the control center for starting and monitoring the workflow.
The interface comprises three input components: a fuse box list selector, a status selector, and a quantity input field. Users can choose a specific fuse box model, import the corresponding CAD model and dataset, and oversee the process status. The input quantity determines the number of fuse boxes to be processed concurrently in a batch. The chosen configuration is transmitted to the robot and the computer vision system. Two control buttons, start and emergency stop, initiate the procedure and enforce safety requirements.
ThingWorx was employed to develop this interface due to its robust IoT integration capabilities, real-time data processing, and smooth connectivity with the robotic system and Onshape CAD models. The platform provides a secure and intuitive interface that allows anyone to control workflows without requiring sophisticated technological skills.
The necessity for dynamic task execution control, intuitive interaction, and operational transparency—crucial components in Industry 5.0 smart manufacturing environments—led to the selection of this interface.
The advantages of this user interface are:
The interface simplifies the assembly process.
It is accessible to any user.
It operates in a fast and secure environment.
4.4. Computer Vision Module
The computer vision module enables the automatic validation of fuse placement using real-time image processing. It consists of several processing stages, from region identification to tolerance setting and time-based validation, all relying on datasets generated during the preprocessing phase.
The workflow for the computer vision module is presented in Figure 6:
The programming section covers two main processes: an offline dataset generation (calibration) stage and the real-time validation of incoming images.
The dataset generation stage, which encompasses the definition of Regions of Interest (ROIs) and the creation of HSV masks, constitutes a one-time configuration task executed during the initial calibration phase. This guarantees that color thresholds and fuse position mappings are accurately aligned with the physical fuse box utilized in the study. Upon completion of this setup, the system functions in real time without necessitating reconfiguration, facilitating the fully autonomous validation of incoming images according to predefined criteria.
4.4.1. ROI Definition
The method commences with the identification of the Regions of Interest (ROIs) for each fuse situated within the fuse box. The define_roi.py script (Figure 7) enables the user to specify the location of each fuse and identify its respective type. This stage generates a pickle file (fuse_roi.pkl) that contains the coordinates and classifications for all ROIs, serving as a reference for future image analysis.
Two ROI definition modes are supported: automatic extraction of ROIs from the CAD model and manual definition using the define_roi.py tool. Unless otherwise stated, experiments use the automatic, CAD-derived ROIs; the manual ROI tool was used only for initial setup and as a fallback.
The dataset creation process begins with the execution of the define_roi.py script from the preprocessing module, which enables the user to manually define the position and type of each fuse within the fuse box. In the current implementation, up to eight fuse types can be specified. This ROI definition tool serves as an alternative when automatic extraction from the CAD model (e.g., via the Onshape API) is not feasible or practical. The spatial location and anticipated fuse type are defined, allowing for the interactive selection of regions of interest directly on a reference image. This manual approach is especially beneficial for tailored configurations, preliminary testing, and scenarios where digital twin synchronization is temporarily inaccessible.
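The sketch below illustrates what such a manual ROI definition step could look like in Python; it is a simplified stand-in for define_roi.py, and the file names, prompts, and data layout are assumptions:

import pickle
import cv2

image = cv2.imread("reference_fusebox.jpg")
rois = []                                             # assumed layout: {"rect": (x, y, w, h), "fuse_type": int}

while True:
    # Draw a rectangle and press Enter/Space to confirm; cancelling (empty selection) ends the loop.
    rect = cv2.selectROI("Select fuse ROI", image, showCrosshair=True)
    if rect == (0, 0, 0, 0):
        break
    fuse_type = int(input("Fuse type (1-8): "))       # up to eight fuse types supported
    rois.append({"rect": rect, "fuse_type": fuse_type})

cv2.destroyAllWindows()
with open("fuse_roi.pkl", "wb") as f:
    pickle.dump(rois, f)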
4.4.2. HSV Mask Creation
The masking_hsv.py script generates HSV-based color masks for each fuse type, conforming to the specified Regions of Interest (ROIs). The user can interactively establish upper and lower HSV thresholds to extract the relevant fuse pixels, effectively removing background or extraneous elements. The application allows navigation through fuse categories while retrieving previously saved ROI settings. The HSV color model is preferred over the RGB model because it offers better segmentation capabilities, allowing direct adjustments to color hue, saturation, and brightness, which makes differentiating colored elements more intuitive and efficient. Unlike RGB, which requires complex combinations of primary channels, HSV allows users to directly choose a color spectrum and modify tolerance levels through the saturation and value parameters. During this process, each mask includes a pixel acceptance threshold that ensures validation only occurs when a sufficient portion of the expected fuse color is detected in its assigned slot. The process results in two output files: hsv_tolerances.pkl, which stores the color range definitions per fuse type, and fuse_roi.pkl, which contains the spatial and classification metadata for each ROI. The validation algorithm is designed to confirm the presence of the correct fuse in its designated location while ignoring foreign fuses incorrectly placed elsewhere, thus reducing the impact of false positives.
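Purely as an illustration of such interactive threshold tuning (not the authors' masking_hsv.py), OpenCV trackbars can be used; the window name, file name, and defaults are assumptions:

import cv2
import numpy as np

roi = cv2.imread("fuse_roi_sample.jpg")               # a cropped ROI image (assumed file)
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

cv2.namedWindow("mask")
for name, maxval, init in [("H_lo", 179, 0), ("H_hi", 179, 179),
                           ("S_lo", 255, 0), ("S_hi", 255, 255),
                           ("V_lo", 255, 0), ("V_hi", 255, 255)]:
    cv2.createTrackbar(name, "mask", init, maxval, lambda v: None)

while True:
    lo = np.array([cv2.getTrackbarPos(n, "mask") for n in ("H_lo", "S_lo", "V_lo")])
    hi = np.array([cv2.getTrackbarPos(n, "mask") for n in ("H_hi", "S_hi", "V_hi")])
    cv2.imshow("mask", cv2.inRange(hsv, lo, hi))       # white where pixels fall inside the HSV range
    if cv2.waitKey(30) == 27:                          # ESC ends tuning; the real tool saves the bounds
        break
cv2.destroyAllWindows()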
To simplify the calibration process, the algorithm automatically computes the average HSV color values across all ROIs associated with a specific fuse type (Figure 9). The user is not required to manually identify the precise color range for each fuse; instead, they only need to specify the degree of tolerance, i.e., how much to expand or contract the HSV range around the computed average. This is achieved by adjusting upper and lower bounds for each HSV component individually. The creation of masks thus becomes considerably more accessible and less prone to errors, especially when dealing with various fuse types or fluctuating lighting conditions. A second file, hsv_tolerances.pkl, is generated to contain the color tolerance intervals for each fuse type.
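A minimal sketch of this average-plus-tolerance computation follows; the structure of fuse_roi.pkl and the tolerance values are assumptions used only for illustration:

import pickle
import cv2
import numpy as np

with open("fuse_roi.pkl", "rb") as f:
    rois = pickle.load(f)                     # assumed: [{"rect": (x, y, w, h), "fuse_type": t}, ...]

frame = cv2.imread("reference_fusebox.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

tolerance = np.array([10, 40, 40])            # assumed user tolerance per H, S, V channel
chan_max = np.array([179, 255, 255])          # OpenCV hue range is 0-179

hsv_tolerances = {}
for fuse_type in {r["fuse_type"] for r in rois}:
    pixels = []
    for r in rois:
        if r["fuse_type"] == fuse_type:
            x, y, w, h = r["rect"]
            pixels.append(hsv[y:y + h, x:x + w].reshape(-1, 3))
    mean_hsv = np.vstack(pixels).mean(axis=0)                       # average colour over all ROIs of this type
    lower = np.clip(mean_hsv - tolerance, 0, chan_max).astype(np.uint8)
    upper = np.clip(mean_hsv + tolerance, 0, chan_max).astype(np.uint8)
    hsv_tolerances[fuse_type] = (lower, upper)

with open("hsv_tolerances.pkl", "wb") as f:
    pickle.dump(hsv_tolerances, f)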
4.4.3. Pixel Tolerance Calibration
The validator_hsv.py script applies the specified HSV masks to each Region of Interest (ROI) to quantify the count of valid pixels that fall within the permissible HSV range. This process represents the final step in dataset preparation. The criterion for assessing the correct positioning of a fuse is the user-defined minimum pixel percentage threshold applicable to each fuse type. The validator module loads the two intermediate configuration files, fuse_roi.pkl and hsv_tolerances.pkl, during the validation procedure. Subsequently, each mask is applied individually to the corresponding ROI. The threshold value for each type of fuse can be modified by the user through an interactive window that displays the count of valid pixels within the ROI. The specified threshold indicates the minimum percentage of acceptable pixels necessary for a fuse to be deemed valid within its assigned slot. If the percentage of matching pixels meets or exceeds this threshold, the fuse is deemed correctly positioned. After calibration, the final file, fusebox.pkl, serves as the comprehensive configuration reference for automated validation and contains all of the data, including ROI locations, HSV tolerance ranges, and pixel validation thresholds (Figure 10).
In the validator_hsv.py module, the pixel threshold, i.e., the minimum percentage of HSV-matching pixels required for a Region of Interest (ROI) to be considered valid, works similarly to a confidence score. The user can adjust this threshold according to the anticipated variation in fuse appearance. If the fuse components exhibit obvious defects such as faded colors, uneven pigmentation, or uneven finishes, a lower threshold may be chosen to increase tolerance. Although this approach increases resistance to physical variation, it also raises the possibility of false positives, which could result in the incorrect validation of defective fuse boxes.
In high-quality manufacturing environments, characterized by fuses that are visually consistent and defect-free, higher thresholds may be used to enforce stricter validation requirements. By guaranteeing that only fuse placements with high visual confidence are accepted, this lowers the possibility of false acceptance. The effectiveness of threshold setting is directly related to the quality of the HSV mask established in the prior calibration step. By accurately capturing the color characteristics of each fuse type, a precisely calibrated mask can significantly reduce false positives, enabling a more permissive confidence threshold while preserving reliability. The threshold thus serves as a protective measure against erroneous detections, with its sensitivity adjustable according to the quality of the components and the HSV configuration.
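The pixel-tolerance check can be sketched as follows; the file layouts and threshold values are assumptions, and this is not the authors' validator_hsv.py:

import pickle
import cv2

with open("fuse_roi.pkl", "rb") as f:
    rois = pickle.load(f)                             # assumed: [{"rect": (x, y, w, h), "fuse_type": t}, ...]
with open("hsv_tolerances.pkl", "rb") as f:
    hsv_tolerances = pickle.load(f)                   # assumed: {fuse_type: (lower, upper)}

pixel_thresholds = {1: 0.60, 2: 0.45}                 # assumed minimum matching-pixel fractions per type

frame = cv2.imread("fusebox.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

for roi in rois:
    x, y, w, h = roi["rect"]
    lower, upper = hsv_tolerances[roi["fuse_type"]]
    mask = cv2.inRange(hsv[y:y + h, x:x + w], lower, upper)
    fraction = cv2.countNonZero(mask) / float(w * h)  # share of pixels matching the expected colour
    valid = fraction >= pixel_thresholds.get(roi["fuse_type"], 0.5)
    print(roi["fuse_type"], f"{fraction:.1%}", "OK" if valid else "FAIL")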
4.4.4. Practical Validation on Time Sequences
Building on the pixel-wise tolerance thresholds defined in the previous step, the following validation mechanism applies a time-sequenced approach to confirm the consistency of detection results under minor fluctuations or lighting inconsistencies.
The proof of concept covers both fuse box types, but only one is presented here because the steps are identical for both.
In practice, validation is not based on a single frame but on a time-based sequence. The main program loads the fusebox.pkl file after renaming it according to the model selected in the ThingWorx interface. During operation, the system captures multiple frames over a specified time interval, such as 5 s. The proportion of frames in which each ROI meets its validation threshold within the given window is then computed; this is called the validation frequency.
A fuse ROI is accepted if it meets the requirements in at least half of the frames. This temporal statistical validation suppresses transient errors caused by changes in lighting or camera movement. Once this process is complete, any incorrectly mounted fuses are identified and reported to the operator so they can be fixed if necessary.
A time-based validation system has been implemented to reduce the likelihood of false negatives, particularly those arising from transient factors such as inadequate lighting, reflections, or minor camera movements. The system aggregates detection results over a predetermined duration, typically 5 s, and determines the validity of a fuse only when the confidence score exceeds a specified threshold across multiple frames. This temporal filtering enhances robustness by minimizing short-term fluctuations in visual information. In well-designed production environments with stable lighting and cameras, the validation time can be reduced to one second. This enables a reduction in cycle times while maintaining dependable detection.
The fusebox.pkl file must be renamed to the model name saved in the mashup and then moved to the root directory; several files of this type will describe the different fuse boxes. When the main program runs, the value selected in the mashup for processing is read and a function is started that analyzes the images from the camera for a certain time interval set by the user. The time interval can vary depending on the accuracy of the dataset; for datasets that may produce errors over very short, recurring periods of time, the analysis period can be increased (the rationale for this approach is further discussed in the following section). After processing the images for the preset period, a statistic is computed that describes, for each fuse, the percentage of time during which it was valid. In short, the number of times it was valid within the 5-s window and the number of times it was not are counted, and the percentage of validated moments is calculated. At this stage, the user defines a final threshold parameter (Figure 11), indicating the minimum percentage of validated frames required for each ROI to be considered correct. Basically, within 5 s, an ROI must be counted as valid in at least 50% of the frames to be considered valid.
Unless otherwise specified, we use a 5-s time-window to aggregate frame-level decisions (50% criterion) as a trade-off between reliability under transient noise and throughput; this parameter is user-settable in the ThingWorx interface and may be reduced (e.g., to 1 s) in stable illumination setups.
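A compact sketch of this frame-aggregation rule is given below; the frame count and example verdicts are assumptions chosen only to illustrate the 50% criterion:

from collections import Counter

WINDOW_FRAMES = 25        # e.g., roughly a 5-s window at an assumed 5 fps
FRAME_RATIO = 0.5         # minimum fraction of frames in which an ROI must validate

def aggregate(frame_verdicts):
    # frame_verdicts: one {roi_index: bool} dict per captured frame.
    counts = Counter()
    for verdict in frame_verdicts:
        counts.update({idx for idx, ok in verdict.items() if ok})
    n = len(frame_verdicts)
    all_rois = {idx for verdict in frame_verdicts for idx in verdict}
    return {idx: counts[idx] / n >= FRAME_RATIO for idx in all_rois}

# Example: ROI 0 validates in 20/25 frames (accepted), ROI 1 in 8/25 frames (rejected).
demo = [{0: i < 20, 1: i < 8} for i in range(WINDOW_FRAMES)]
print(aggregate(demo))     # {0: True, 1: False}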
After this 5-s validation process, a separate window will display the possible fuses that are poorly mounted.
Step 2 is when the user defines the accepted range for each fuse type in the HSV space: a color average is computed for all fuses of the same type, the user defines an upper and lower tolerance, and the program then extends the color range accordingly.
Step 3 involves calculating the color average for all ROIs of the same type, as well as determining a color average for each individual ROI, as shown in Figure 12, Figure 13 and Figure 14. The distance between the general average and the individual average of an ROI is then computed; if it falls within the threshold declared by the user, the ROI is considered valid (a first, frame-level type of validity).
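As an illustration of this frame-level check, the comparison could be written as below; the Euclidean distance and all numeric values are assumptions, since the paper does not specify the distance measure:

import numpy as np

type_mean = np.array([110.0, 180.0, 160.0])   # average HSV over all ROIs of one fuse type
roi_mean = np.array([114.0, 172.0, 150.0])    # average HSV inside a single ROI
threshold = 25.0                              # user-declared maximum distance

distance = np.linalg.norm(type_mean - roi_mean)   # assumed Euclidean distance in HSV space
frame_valid = distance <= threshold               # "first type of valid" (per-frame decision)
print(f"distance={distance:.1f}, valid={frame_valid}")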
In Step 4 (Figure 15), all of this data is compiled and loaded into the main program under the name of the fuse box model it corresponds to. The program applies a threshold of 50% over 5 s: if, during a 5-s validation period, an ROI is validated in at least 50% of all frames, it is considered truly valid; otherwise, a window displays which fuses are problematic.
Figure 16 illustrates the output of the final validation step. The left pane displays type-specific HSV thresholds and validation results per region of interest (ROI), while the right pane highlights fuses that failed validation, marked in red with corresponding labels (e.g., “AvgOK0”). This comparison allows for clear identification of faulty placements in real time based on a time-sequence confidence analysis.
At the end of each validation cycle, the program queries ThingWorx to check whether the fuse box model has changed; if so, it switches to the corresponding dataset.
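A hedged sketch of such a query via the ThingWorx REST property endpoint is shown below; the server URL, Thing name, property name, and application key are hypothetical placeholders rather than the actual deployment values:

import requests

THINGWORX_URL = "https://thingworx.example.com/Thingworx"            # placeholder server
HEADERS = {"appKey": "<application-key>", "Accept": "application/json"}

def get_selected_model():
    # Read a Thing property through the ThingWorx REST API (the response is an InfoTable in JSON).
    url = f"{THINGWORX_URL}/Things/FuseBoxController/Properties/SelectedModel"
    resp = requests.get(url, headers=HEADERS, timeout=5)
    resp.raise_for_status()
    return resp.json()["rows"][0]["SelectedModel"]

model = get_selected_model()
dataset_file = f"{model}.pkl"     # the renamed fusebox.pkl dataset for the chosen model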
This multi-step approach ensures robust and adaptive validation by combining color segmentation, pixel-level tolerancing, and temporal filtering—critical components for smart manufacturing under the Industry 5.0 paradigm.
5. Results
The smart workflow implemented for fuse box validation attained a 95% accuracy rate, thereby validating the system’s reliability in distinguishing between correct and incorrect fuse placements. The outcome remained consistent across multiple test runs using two different fuse box models. Even in the presence of slight background noise or fluctuating lighting, the HSV filtering and ROI-based detection used by the vision module proved successful in differentiating between fuse types and colors.
The fuse validation process under controlled illumination conditions is shown in Figure 17, which also demonstrates the improved color segmentation and detection accuracy. The confidence scores for each Region of Interest (ROI) are displayed in the system output on the left; red rectangles indicate detection errors, and green rectangles indicate fuses that were placed correctly. Through the application of HSV filtering and temporal averaging, the scores of “2.67%” and “5.95%” quantitatively represent the degree of alignment between the expected and actual fuse color profiles. A high-resolution camera view of the fuse’s precise location is displayed on the right for visual verification. This dual-view system effectively eliminates false positives caused by glare, shadows, and minor color variations, ensuring close synchronization between virtual analysis and real-world conditions. The example demonstrates the system’s robustness under optimal lighting conditions and highlights the significance of illumination quality for visual assessment.
Figure 18 demonstrates a validation failure caused by a missing or improperly placed fuse. The Region of Interest (ROI) displayed a red status and obtained a confidence score of AvgOK: 0, indicating that no HSV-matching pixels were identified within the 5-s validation period. The ThingWorx interface quickly displayed that the ROI was invalid. Errors occasionally arose from transparent or misaligned fuses, making the system’s reliable identification of these issues crucial for sustaining autonomous quality assurance. The validation method utilized time-sequenced image frames, enabling the system to statistically verify the presence of the fuse with a confidence threshold of 50% over a period of 5 s. This method alleviated transient visual disruptions, such as obstructions, illumination discrepancies, or slight camera shifts. The findings illustrate the efficacy of temporal filtering and dynamic thresholding in detecting erroneous insertions or absent components, thus guaranteeing substantial reliability in real-time fuse verification.
A series of validation experiments were conducted on two different fuse box types, each subjected to 100 independent trials under controlled conditions. The results are presented in Table 1. The system correctly identified 190 of 200 fuse boxes, for an overall accuracy of 95%. The layout and visual complexity of the fuse configuration affected detection robustness, with Type 1 fuse boxes performing somewhat better (97%) than Type 2 (93%). The solution also reduced manual-intervention time by 85% by automating most assembly verification processes. The transfer of defective boxes to a correction zone for operator evaluation demonstrated Industry 5.0-compliant workflow segmentation and traceability.
6. Discussion
Compared to traditional manual inspection processes, the proposed automated workflow offers substantial improvements in both speed and accuracy. The use of Onshape for CAD-based modeling allowed precise virtual definition of fuse positions and types, which directly informed the ThingWorx-based decision logic and robotic actions. The integration between these platforms and the computer vision module ensured synchronized data flow and reliable decision-making.
The HSV segmentation method is susceptible to environmental conditions, like lighting intensity and reflections; hence, this study incorporates various mitigation measures. By dynamically calculating HSV thresholds based on the average color values of all designated ROIs and implementing adjustable tolerance limits, the system adapts to fluctuations in light diffusion and color consistency. The application of a 5-s time-windowed validation reduces the likelihood of false negatives caused by transitory oscillations. Regulated artificial illumination is advantageous for robotic assembly cells in production environments. Typically, enclosures or stationary lighting systems are employed to mitigate unpredictable visual noise. Given the outlined conditions, the proposed method is both scalable and reliable in industrial environments. Future advances may improve robustness via adaptive illumination control or machine learning-based segmentation to more efficiently handle transparent or highly reflective components.
Although HSV-based segmentation was chosen for its user-friendliness and reliability under controlled illumination, further research will compare it against RGB color space, depth-based analysis, and AI-based segmentation to determine the comprehensive performance and trade-offs. Alternative color spaces, such as LAB or YCbCr, provide enhanced resilience to variations in illumination; however, they require more intricate calibration processes, which may necessitate further exploration.
Figure 16 and Figure 17 demonstrate that the computer vision system does not operate merely on static thresholds; rather, it assesses the presence of fuses dynamically over time. This temporal validation method improves dependability under varied illumination conditions and fuse types, including partially reflective or transparent variants.
The results indicate that the system can deliver real-time visual feedback, identifying errors as they arise and facilitating corrections through manual or automated feedback loops in subsequent iterations.
Although the performance was generally strong, several concerns were noted. Transparent fuses exhibited significant calibration challenges due to their low color saturation, which affected the precision of HSV filtering. Inconsistent ambient lighting may result in false negatives, even when tolerance levels are established. Future iterations may address these limitations by integrating more adaptable color calibration algorithms and improved lighting solutions.
The system was evaluated on two different fuse box configurations, each undergoing 100 independent validation cycles. As shown in Table 1 and Figure 19, the overall accuracy across both types reached 95%, with Type 1 fuse boxes achieving a slightly higher success rate (97%) compared to Type 2 (93%). A total of 190 out of 200 fuse boxes were correctly classified, while 10 were misclassified (Figure 20) due to visual noise, inconsistent fuse appearances, or HSV mismatches.
We used the Wilson score method for binomial proportions to calculate 95% confidence intervals (CIs) in order to assess the statistical significance of the reported detection accuracies in Table 1. When sample sizes are moderate or the estimated proportions are near 0% or 100%, as in this study, this approach is more accurate than the conventional Wald approximation.
For a binary classification task (e.g., correct or incorrect fuse detection), the Wilson score interval for a success proportion \(\hat{p} = x/n\), where:
x is the number of successful outcomes,
n is the total number of trials,
z is the critical value from the standard normal distribution (for 95% confidence, z = 1.96),
is calculated as

\[ \mathrm{CI} = \frac{\hat{p} + \dfrac{z^{2}}{2n} \pm z\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n} + \dfrac{z^{2}}{4n^{2}}}}{1 + \dfrac{z^{2}}{n}}. \]
This formula adjusts the interval by accounting for the uncertainty of small samples and keeps the bounds within the valid range of [0, 1].
The value z = 1.96 corresponds to the standard normal distribution, in which approximately 95% of values lie within ±1.96 standard deviations of the mean. Thus, the interval defines a range in which we can be 95% confident that the true detection accuracy lies. The results of these calculations are summarized in Table 1.
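For reference, the interval can be reproduced with a few lines of Python; the success counts are those reported in Section 5 (190/200 overall, 97/100 for Type 1, 93/100 for Type 2):

import math

def wilson_ci(x, n, z=1.96):
    # Wilson score interval for a binomial proportion x/n at the given z value.
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wilson_ci(190, 200))   # overall accuracy
print(wilson_ci(97, 100))    # Type 1
print(wilson_ci(93, 100))    # Type 2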
The difference in performance between the two types can be attributed to structural and chromatic complexity. Type 2 fuse boxes include a higher number of similar-colored fuses placed in denser layouts, increasing the likelihood of misclassification under suboptimal lighting. These results confirm the robustness of the proposed method while also highlighting its sensitivity to color variation and physical fuse alignment—factors that may be mitigated through improved calibration or by introducing depth-aware vision in future iterations.
The system’s modularity and scalability were validated by testing two different fuse box models, and results suggest the workflow can be extended to other types of assemblies with minimal adaptation. The ThingWorx interface, designed for intuitive control, allowed non-expert users to initiate and supervise operations efficiently, demonstrating the approach’s usability and industrial readiness.
The chosen 5-s validation interval was derived from empirical observations during experimentation and provides an effective equilibrium between latency and detection reliability. Nonetheless, given that this parameter is adjustable, subsequent research will encompass a sensitivity analysis to determine the optimal window length across diverse environmental and system conditions.
This study serves as a proof-of-concept implementation aimed at demonstrating the viability of an HSV-based validation system incorporated inside a digital twin architecture. The 50% confidence criterion utilized in the 5-s time-based validation window was selected heuristically, based on empirical data, to balance robustness and responsiveness. The system demonstrated robust performance under test settings; however, employing a more objective method, such as Receiver Operating Characteristic (ROC) curve analysis, would provide a statistically rigorous means to ascertain the ideal threshold. This study is the first step in a longer research project that will analyze these approaches in subsequent studies to enhance performance and adaptability in a range of industrial settings.
Preliminary investigations indicated that the average system reaction time from CAD input to robot execution trigger was under 500 milliseconds. To guarantee real-time reliability and pinpoint potential improvement opportunities, future initiatives will encompass comprehensive latency assessments for each communication link (CAD–ThingWorx–Robot).
Two distinct types of fuse boxes were employed to validate the system; additional testing with a broader array of geometries, densities, and fuse colors is anticipated in future endeavors to evaluate scalability and generalizability across various configurations.
Subsequent research will investigate applications such as PCB assembly and multi-component sensor modules to further substantiate the system’s generalizability.