Article
Peer-Review Record

Smart Machine Vision System to Improve Decision-Making on the Assembly Line

by Carlos Americo de Souza Silva * and Edson Pacheco Paladini
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 December 2024 / Revised: 14 January 2025 / Accepted: 23 January 2025 / Published: 27 January 2025
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents the creation of a hybrid smart-vision inspection system that enhances the efficiency, accuracy, and reliability of PCB and chassis inspections in the automotive industry. The system integrates machine vision with traditional vision sensors, enabling automated detection of missing or incorrect components, threads, and thermal paste applications. The use of machine vision improves the Failure Mode and Effects Analysis (FMEA) process by enhancing defect detection capabilities, moving from low (7) to high (2) detection levels. This advancement leads to a more robust and standardized inspection process, reducing human errors, production costs, and product defects, while supporting the transition to Industry 4.0 and smart manufacturing. This interesting topic is within the scope of the Machines Journal.  However, I have a few comments. Please refer to them.

1)       Describe the collecting data process from machine vision, i.e., clearly identify and justify the data sources.

2)       How are the images preprocessed before analysis (e.g., noise reduction, binarization)?

3)       How is data quality ensured for the measurements (e.g., resolution, brightness, and contrast)? How are issues such as lighting variability or sensor inconsistencies handled?

4)       How is the storage and handling of image data managed (e.g., format, volume, security)?

5)       How are the validity and reliability of the image data ensured? Are repeated measurements or cross-validation methods used?

6)       Provide a more thorough explanation of how to identify fiducial points using the Hough Transform. Are alternative methods considered?

7)       It is essential to specify the source of the images used for inspection (e.g., real production data, synthetic data, or a combination). This clarity will help the reader understand the robustness of the proposed method.

8)       It would be beneficial to provide quantitative results on the efficiency and accuracy of the system. Include confusion matrix metrics (precision, recall, F1-score) to demonstrate the system’s performance.

9)       The objective to "develop a new methodology" is too broad. Narrow the objective to a specific, measurable goal (e.g., to reduce false negatives in PCB inspection by a certain percentage).

10)   Clarify the steps taken to test the system. Was there a control group (manual inspection) used as a baseline for comparison? This comparison should be quantitative.

11)   The setup process is discussed but lacks sufficient technical detail for reproducibility. Provide the configuration of cameras, sensors, and environmental conditions (e.g., lighting conditions, camera model) used during the inspection process.

12)   The problem statement should be more precise. The current version introduces several issues (e.g., miniaturization of PCB components, manual inspection errors) without prioritizing them. Focus on one or two key issues.

13)   Provide a detailed description of the experimental setup. This includes sensor type, model of the camera, vision system hardware, and computational requirements.

14)   It is crucial to address the environmental conditions during the inspection process (e.g., lighting, vibration, etc.). These factors could significantly impact system performance.

15)   Clearly identify the control variables (unchanging elements) and dependent variables (measured outcomes). For example, were the PCB samples randomized?

16)   The image acquisition process should be more thoroughly detailed. The paper should explain how image noise, occlusions, or reflections are managed, as these are common challenges in vision-based inspection.

17)   The results section should connect experimental findings to the research objectives. Discuss whether the use of machine vision reduced false negatives or false positives and by how much.

18)   Compare the proposed method with manual inspection or alternative automated approaches. Use statistical tests to show significant improvements.

19)   Explicitly state the limitations of the proposed system (e.g., challenges with reflective materials, as mentioned) and how they could be addressed in future work.

20)   The conclusion should clearly summarize key contributions, such as improvements in detection rates, cost savings, or operational efficiencies.

21)   Provide insights into technical aspects that were pivotal for success (e.g., algorithm choice, choice of hardware).

22)   Discuss potential future research directions. For example, the application of AI-based deep learning models to improve detection accuracy could be explored. Suggest further tests under different environmental conditions to ensure system robustness.

23)   Reduce redundancy in explanations, especially in the introductory and background sections.

 

24)       Terms like "vision sensors" and "vision system" are used interchangeably. Clarify distinctions if any exist, or use consistent terminology throughout the document.

 

Comments on the Quality of English Language

The text has several grammatical issues, including sentence structure and verb tense. For instance, “Machine vision has been gaining ground” could be revised to “Machine vision is increasingly being adopted.”

Author Response

Response to Reviewer 1 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted file.

2. Questions for General Evaluation

Does the introduction provide sufficient background and include all relevant references?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The introduction has been improved, and five new references have been added to the revised article.

Are all the cited references relevant to the research?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: All citations are relevant to the research development, and nine new references were added in total.

Is the research design appropriate?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The research design is adequate and meets the scope of the paper on intelligent machines applied to the inspection model.

Are the methods adequately described?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: Section 2.1 was added, describing how the bibliometric research on the topic was carried out, how articles were selected, and how growth of the field was assessed based on Scopus and Web of Science.

Are the results clearly presented?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: A framework of the proposed system (Figure 9) was added and described. The results information was also improved, and the result illustration was updated for approved and rejected pieces.

Are the conclusions supported by the results?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The conclusion was improved to support the results obtained.

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: This paper presents the creation of a hybrid smart-vision inspection system that enhances the efficiency, accuracy, and reliability of PCB and chassis inspections in the automotive industry. The system integrates machine vision with traditional vision sensors, enabling automated detection of missing or incorrect components, threads, and thermal paste applications. The use of machine vision improves the Failure Mode and Effects Analysis (FMEA) process by enhancing defect detection capabilities, moving from low (7) to high (2) detection levels. This advancement leads to a more robust and standardized inspection process, reducing human errors, production costs, and product defects, while supporting the transition to Industry 4.0 and smart manufacturing. This interesting topic is within the scope of the Machines Journal.  However, I have a few comments. Please refer to them.

1)      Describe the collecting data process from machine vision, i.e., clearly identify and justify the data sources.

Response 1: Thank you for pointing this out. During the assembly phase, in which the board is mounted on the chassis and the screwing and thermal-paste application are carried out, the operator previously performed the inspection manually (a visual check). The preventive tool (FMEA) rated the detection as 7, increasing the risk factor based on the number of defects found. After the automated system was implemented, the detection rating improved (was reduced) according to the FMEA, and the defects were eliminated: no defect advanced to the next assembly station, owing to the high detection capability of the camera system.

During the implementation phase, a study was also carried out using a machine learning algorithm (SVM, support vector machine), which is not described in this article. Testing under this condition required a database of 760 images of OK and NOK parts for training, 190 images for validation, and 190 images for testing; almost three days of production were needed to collect enough images to train the algorithm. The computer vision concept was then evaluated, since it does not require a large image base for training.
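The FMEA improvement described above (detection rating moving from 7 to 2) can be illustrated with a classic risk priority number calculation. Only the detection ratings come from this response; the severity and occurrence values below are assumed example ratings, not figures from the study.

```python
# Illustrative FMEA risk priority number (RPN) sketch.
# Only the detection ratings (7 before, 2 after) come from the text;
# severity and occurrence are hypothetical example values.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Classic RPN = S x O x D, each rated 1 (best) to 10 (worst)."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("FMEA ratings must be in 1..10")
    return severity * occurrence * detection

severity, occurrence = 8, 3              # assumed example ratings
before = rpn(severity, occurrence, 7)    # manual visual inspection
after = rpn(severity, occurrence, 2)     # camera-based inspection
print(before, after)                     # 168 48
```

Lowering only the detection rating cuts the illustrative RPN by more than two thirds, which mirrors the reduced action priority the authors report.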

Comments 2: How are the images preprocessed before analysis (e.g., noise reduction, binarization)?

Response 2: Agreed. To implement the system using the Vision Builder tool, the images were converted from color to grayscale and then binarized, so that the inspection strategies produced better results.
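A minimal sketch of this preprocessing step (grayscale conversion followed by binarization) is shown below. The luma weights are the standard ITU-R BT.601 coefficients, and the fixed threshold is an assumed example value; in practice it would be tuned, or chosen automatically with a method such as Otsu's.

```python
import numpy as np

def preprocess(rgb: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert an RGB image (H, W, 3, uint8) to a binary mask.

    Grayscale uses ITU-R BT.601 luma weights; the fixed threshold is
    an assumed example value, not the one used in the actual system.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return (gray >= threshold).astype(np.uint8)  # 1 = foreground

# Tiny synthetic example: a bright 2x2 square on a dark background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = 200
mask = preprocess(img)
print(int(mask.sum()))  # 4 bright pixels survive binarization
```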

Comments 3: How is data quality ensured for the measurements (e.g., resolution, brightness, and contrast)? How are issues such as lighting variability or sensor inconsistencies handled?

Response 3: To ensure the reliability of the images, an 8-megapixel camera was installed for the vision system to maintain image quality, and a lighting system was installed to guarantee a consistent inspection standard (described on page 9 and illustrated in Figure 9).

The vision sensor datasheet is attached for reference; the vision sensor is a low-cost solution with integrated illumination and camera.

 

Comments 4: How is the storage and handling of image data managed (e.g., format, volume, security)?

Response 4: All images are stored on a storage system with eight 10-terabyte hard disks. The company keeps the saved images for one year as a basis for analyzing possible zero-kilometer faults reported from the field. One month of production, running three shifts at 34 pieces per hour, generates about 20 GB of image data.
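As a back-of-envelope consistency check of these storage figures, the sketch below assumes continuous three-shift operation (24 h/day, 30 days/month) and one saved image per piece; both are assumptions, not facts stated in the response.

```python
# Rough check of the quoted storage figures: 34 pieces/hour,
# 3 shifts, ~20 GB of images saved per month of production.
# Assumes 24 h/day, 30 days/month, one image per piece.

pieces_per_hour = 34
images_per_month = pieces_per_hour * 24 * 30       # 24480 images
monthly_volume_gb = 20
avg_image_mb = monthly_volume_gb * 1024 / images_per_month
print(images_per_month, round(avg_image_mb, 2))    # 24480 0.84
```

An average of roughly 0.84 MB per stored image is plausible for compressed 8-megapixel inspection captures, so the quoted figures are mutually consistent.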

Comments 5: How are the validity and reliability of the image data ensured? Are repeated measurements or cross-validation methods used?

Response 5: After the best methodology was defined, hardware was selected to maintain repeatability and reproducibility for each product/image passing through the chassis screwing and gap-filler application station.

Comments 6: Provide a more thorough explanation of how to identify fiducial points using the Hough Transform. Are alternative methods considered?

Response 6: The board fiducial is a fixed point that serves as the starting reference for all inspection strategies. Alternative reference points, such as components, were evaluated, but because components can vary slightly during the assembly phase, a PCB hole was chosen as the reference, as it kept the inspections stable.

After deciding that the hole would be the reference, we chose the Hough transform because it is a simple and effective method for detecting circles and straight lines using the OpenCV system (within Vision Builder).
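To make the fiducial-detection step concrete, here is a minimal accumulator-voting sketch of the circular Hough transform for a hole of known radius. This is an illustration of the technique only; a production system would use an optimized routine such as OpenCV's `cv2.HoughCircles` rather than this loop.

```python
import numpy as np

def hough_circle_center(edges: np.ndarray, radius: int):
    """Locate the center of a circle of known radius in a binary edge map.

    Every edge pixel votes for all candidate centers at distance
    `radius` from it; the accumulator peak is the best-supported center.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return int(peak[0]), int(peak[1])

# Synthetic fiducial: circle of radius 8 centered at row 20, col 25.
edge = np.zeros((40, 50), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
edge[np.round(20 + 8 * np.sin(t)).astype(int),
     np.round(25 + 8 * np.cos(t)).astype(int)] = 1
print(hough_circle_center(edge, 8))  # approximately (20, 25)
```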

Comments 7: It is essential to specify the source of the images used for inspection (e.g., real production data, synthetic data, or a combination). This clarity will help the reader understand the robustness of the proposed method.

Response 7: The images used for the analysis and implementation of the system are real images from the assembly process of car radio production. For the development of the strategies, they were analyzed offline and after implementation and fine-tuning, they were validated with images generated by series production.

Comments 8: It would be beneficial to provide quantitative results on the efficiency and accuracy of the system. Include confusion matrix metrics (precision, recall, F1-score) to demonstrate the system’s performance.

Response 8: After adjustment, the accuracy of the inspection strategy is 100% using Vision Builder.

A test was also performed with a machine learning algorithm (SVM), but it is not described in this paper, since that methodology was not implemented. The results obtained with the SVM are given below.

Support vector machines are a supervised machine learning technique used in classification and regression problems. An SVM seeks an optimal hyperplane that separates a data set.

The SVM places the hyperplane so that the data belonging to each class lie on one side of it, and it maximizes the separation margin, that is, the distance from the hyperplane to the closest points of each class (the support vectors).

Definition of the optimal hyperplane: the figure illustrates the maximum-margin separator, represented by the solid red line, and the margins, represented by the dashed lines. The support vectors are the holes highlighted by the dashed circle and the connector terminals highlighted by the green squares closest to the separator.

To create the failure mode detector, we used the support vector machine (SVM) algorithm in the first experiment with a linear kernel, since the 600 images in the database are linearly separable. The classifier assigns each training sample to its corresponding NOK or OK class. Once trained, the classifier can make new failure mode predictions on new samples of PCB images.
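The linear-kernel SVM classifier described above can be sketched as follows, assuming scikit-learn is available. The real 600-image OK/NOK database is not available here, so two linearly separable synthetic feature clusters stand in for the PCB image feature vectors.

```python
# Sketch of a linear-kernel SVM OK/NOK classifier.
# Synthetic, linearly separable 2-D clusters stand in for the
# real PCB image features, which are not available here.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ok = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))   # class 0 = OK
nok = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(50, 2))  # class 1 = NOK
X = np.vstack([ok, nok])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))                          # 1.0 on separable data
print(clf.predict([[0.1, -0.2], [2.9, 3.1]]))   # [0 1]
```

Because the clusters are widely separated relative to their spread, the maximum-margin hyperplane classifies every sample correctly, which is the behavior the authors report for their linearly separable image database.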

Confusion Matrix

                                     Was the algorithm detecting the failure mode in the image?
Is there an image failure mode?      True                      False
True                                 True Positive (TP)        False Negative (FN)
False                                False Positive (FP)       True Negative (TN)

The experiment metrics, obtained from the SVM classification, are demonstrated through the confusion matrix and the learning curve generated after training the failure mode model.

Confusion matrix for the SVM.

The TP cell reflects the accuracy of classifying NOK PCBs, with a rate of 1.00. The TN cell represents the classification accuracy for OK PCBs, with a rate of 0.98. The FP cell represents OK PCBs classified as NOK, with a rate of 0.02, indicating a small prediction error. The FN cell represents NOK PCBs classified as OK, with a rate of 0.00, indicating that the classifier made no mistakes in this prediction.

The classifier performance metrics were obtained from the confusion matrix of the trained SVM. The most commonly used metrics for evaluating machine learning models are learning and ROC curves, accuracy, specificity, and sensitivity.

 

SVM model evaluation metrics.

The accuracy of the model reflects its performance during training and learning. It is calculated as the total number of correct predictions divided by the total number of images in the database, demonstrating the model's ability to make correct predictions; the accuracy of the SVM was 99%. Precision considers only true positive values, preventing false positives from biasing the result. The recall metric indicates how often an image is correctly identified as belonging to a given class. The F1-score, the harmonic mean of precision and recall, evaluates the quality of the model's training; this metric is fundamental for imbalanced datasets.
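These metric definitions can be computed directly from confusion-matrix counts. The counts below are an assumed split of the 600-image database chosen to be consistent with the rates quoted above (TN rate 0.98, FP rate 0.02, overall accuracy 99%); the exact split is not stated in the response.

```python
# Accuracy, precision, recall, and F1 from confusion-matrix counts.
# The counts are an assumed 300/300 OK-NOK split consistent with
# the quoted rates, not figures taken from the paper.

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = metrics(tp=300, fp=6, fn=0, tn=294)
print(round(m["accuracy"], 2))   # 0.99
print(round(m["recall"], 2))     # 1.0
```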

 

Learning curve for SVM.

The figure shows the model's learning curve. The training accuracy increases with the number of images used by the algorithm; from around 93 tested images onward, the accuracy remains consistent and stable, reaching 99% at the end of training.

Comments 9: The objective to "develop a new methodology" is too broad. Narrow the objective to a specific, measurable goal (e.g., to reduce false negatives in PCB inspection by a certain percentage).

Response 9: The term "new methodology" was adopted on the basis of the bibliometric research on the topic, added in Section 2.1: no papers could be found in the Scopus and Web of Science databases that use computer vision and a vision sensor simultaneously for inspection in this way.

 

Comments 10: Clarify the steps taken to test the system. Was there a control group (manual inspection) used as a baseline for comparison? This comparison should be quantitative.

Response 10: The previous method was visual inspection by the operator, with the risk of real failures being shipped to the customer. This risk was also identified by the preventive failure mode detection tool (FMEA), according to the AIAG-VDA FMEA Manual, 1st Edition (2019).

Comments 11: The setup process is discussed but lacks sufficient technical detail for reproducibility. Provide the configuration of cameras, sensors, and environmental conditions (e.g., lighting conditions, camera model) used during the inspection process.

Response 11: Technical information has been added on page 9 (Figure 9) as part of the framework of the proposed smart vision system.

Comments 12: The problem statement should be more precise. The current version introduces several issues (e.g., miniaturization of PCB components, manual inspection errors) without prioritizing them. Focus on one or two key issues.

Response 12: The problem statement was sharpened to emphasize this point: because the inspected items are miniaturized, manual inspection carries a risk of failure. This justifies researching an automated approach to mitigate such failures.

Comments 13: Provide a detailed description of the experimental setup. This includes sensor type, model of the camera, vision system hardware, and computational requirements.

Response 13: Technical information has been added on page 9 (Figure 9) as part of the framework of the proposed smart vision system: sensor type IV-HG500GA and an 8-megapixel camera.

Comments 14: It is crucial to address the environmental conditions during the inspection process (e.g., lighting, vibration, etc.). These factors could significantly impact system performance.

Response 14: The framework with more information about the hardware used, such as cameras and lighting systems, was added.

Comments 15: Clearly identify the control variables (unchanging elements) and dependent variables (measured outcomes). For example, were the PCB samples randomized?

Response 15: The PCBAs were taken from the normal process, which includes all the uncontrolled variables introduced by the manufacturing operators: three different groups of people on different shifts perform the same operation. Maintaining a consistent inspection standard is therefore very important for quality assurance.

Comments 16: The image acquisition process should be more thoroughly detailed. The paper should explain how image noise, occlusions, or reflections are managed, as these are common challenges in vision-based inspection.

Response 16: More relevant information has been added to the proposed system in item 3 (Machine Vision System).

Comments 17: The results section should connect experimental findings to the research objectives. Discuss whether the use of machine vision reduced false negatives or false positives and by how much.

Response 17: After the vision system was implemented, the problems related to missing gap filler, screws, and gaskets, and to threadless holes, were eliminated.

The system is validated daily by operators using positive and negative samples.

Comments 18: Compare the proposed method with manual inspection or alternative automated approaches. Use statistical tests to show significant improvements.

Response 18: The possibility of real failures advancing to the next stations was eliminated, as the inspection system is robust and the decision no longer rests with the operator, as it did in the manual system.

After the system was implemented, failures were eliminated; this is included in the conclusion as one of the relevant results.

Comments 19: Explicitly state the limitations of the proposed system (e.g., challenges with reflective materials, as mentioned) and how they could be addressed in future work.

Response 19: The proposed model structure supports computer vision approaches and is prepared for future work with machine learning algorithms, being ready to build training, validation, and testing databases.

Previously, these initial conditions did not exist, as there were not enough images to train any algorithm with adequate accuracy.

In future work, comparisons can be made with other algorithms, including neural networks.

Comments 20: The conclusion should clearly summarize key contributions, such as improvements in detection rates, cost savings, or operational efficiencies.

Response 20: The reliability gains provided by the system were added to the conclusion; five relevant points were listed:

(1) the FMEA risk priority number was improved;

(2) real failures that were approved by manual inspection due to the operator's mistake were reduced;

(3) traceability control was improved based on saved images;

(4) the production output was maintained with additional automatic inspections;

(5) real failures were eliminated.

Comments 21: Provide insights into technical aspects that were pivotal for success (e.g., algorithm choice, choice of hardware).

Response 21: The Machine Vision System section describes the model developed to improve the inspection performance for the object under study.

Comments 22: Discuss potential future research directions. For example, the application of AI-based deep learning models to improve detection accuracy could be explored. Suggest further tests under different environmental conditions to ensure system robustness.

Response 22: A paragraph was added in the conclusion for future research, using machine learning and deep learning algorithms to enable autonomous learning inspection processes.

Comments 23: Reduce redundancy in explanations, especially in the introductory and background sections.

Response 23: The sections have been revised to be clearer and more objective.

Comments 24: Terms like "vision sensors" and "vision system" are used interchangeably. Clarify distinctions if any exist, or use consistent terminology throughout the document.

Response 24: Vision sensors and vision systems are different approaches; a clarification of the difference between the two models was added in Section 3 of the article.

4. Response to Comments on the Quality of English Language

Point 1: The text has several grammatical issues, including sentence structure and verb tense. For instance, “Machine vision has been gaining ground” could be revised to “Machine vision is increasingly being adopted.”

Response 1: The new version of the article was submitted for review by the MDPI English editing service, where the text was checked and adjusted for grammar and technical terms for academic publication. The certification is attached.

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This article proposes developing a hybrid industrial vision system with machine vision and vision sensors to verify  components and 7 screw 14 threads. This research aims to use machine vision to increase inspection reliability in an automated way and reduce non-conformity rates in the manufacturing process on the assembly line of automotive products. My comments are as follows:

Machine vision resolution should be explained and contributed.

Author Response

Response to Reviewer 2 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted file.

2. Questions for General Evaluation

Does the introduction provide sufficient background and include all relevant references?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The introduction has been improved, and five new references have been added to the revised article.

Are all the cited references relevant to the research?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: All citations are relevant to the research development, and nine new references were added in total.

Is the research design appropriate?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The research design is adequate and meets the scope of the paper on intelligent machines applied to the inspection model.

Are the methods adequately described?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: Section 2.1 was added, describing how the bibliometric research on the topic was carried out, how articles were selected, and how growth of the field was assessed based on Scopus and Web of Science.

Are the results clearly presented?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: A framework of the proposed system (Figure 9) was added and described. The results information was also improved, and the result illustration was updated for approved and rejected pieces.

Are the conclusions supported by the results?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The conclusion was improved to support the results obtained.

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: This article proposes developing a hybrid industrial vision system with machine vision and vision sensors to verify components and 7 screw 14 threads. This research aims to use machine vision to increase inspection reliability in an automated way and reduce non-conformity rates in the manufacturing process on the assembly line of automotive products. My comments are as follows:

 

Machine vision resolution should be explained and contributed.

 

Response 1: Thank you for pointing this out.

To ensure the reliability of the images, an 8-megapixel camera was installed for the vision system to maintain image quality, and a lighting system was installed to guarantee a consistent inspection standard (described on page 9 and illustrated in Figure 9).

The vision sensor datasheet is attached for reference; the vision sensor is a low-cost solution with integrated illumination and camera.

 

4. Response to Comments on the Quality of English Language

Point 1:

Response 1: The new version of the article was submitted for review by the MDPI English editing service, where the text was checked and adjusted for grammar and technical terms for academic publication. The certification is attached.

 

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1. The abstract provides a good overview of the study, but it can be made more concise. Consider focusing on the main findings and implications more succinctly.

2. While comprehensive, the literature review section can be slightly condensed. Focus on the most relevant studies and their direct impact on the current research.

3. If you use any abbreviation, spell out the full phrase or term the first time you use it in your paper and include the abbreviation in parentheses. You can use the abbreviation each time after that.

4. Review grammar and syntax for clarity and coherence. Many typos and grammatical errors exist and need to be checked carefully.

5. The introduction is weak. What is the motivation for doing this research? The authors should mention the shortcomings in the literature and the motivation for the research, and also state the contribution and objective of the work.

6. The introduction and literature review lack support from related papers on fuzzy sets and their extensions.

7. The conclusion should be rewritten. The authors should provide more details about their research and results. It is also suggested to provide more information about the limitations of the research and recommendations for future work.

8. How robust and generalizable is the proposed methodology for hybrid machine vision systems across different manufacturing industries?

9. Could there be alternative approaches to linking machine vision outputs with FMEA risk assessment?

10. Has the system been tested under real-world manufacturing conditions? If so, were there challenges in transitioning from a controlled environment to production?

11. Please provide more detail about the algorithms used, such as the Hough transform for fiducial point detection, and their computational efficiency.

12. Please compare the system's results with existing machine vision systems to highlight improvements or trade-offs.  

Author Response

Response to Reviewer 3 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted file.

2. Questions for General Evaluation

Does the introduction provide sufficient background and include all relevant references?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The introduction has been improved, and five new references have been added to the revised article.

Are all the cited references relevant to the research?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: All citations are relevant to the research development, and nine new references were added in total.

Is the research design appropriate?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The research design is adequate and meets the scope of the paper on intelligent machines applied to the inspection model.

Are the methods adequately described?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: Section 2.1 was added, describing how the bibliometric research on the topic was carried out, how articles were selected, and how growth of the field was assessed based on Scopus and Web of Science.

Are the results clearly presented?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: A framework of the proposed system (Figure 9) was added and described. The results information was also improved, and the result illustration was updated for approved and rejected pieces.

Are the conclusions supported by the results?

Reviewer’s evaluation: Yes / Can be improved / Must be improved / Not applicable

Response and revisions: The conclusion was improved to support the results obtained.

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: The abstract provides a good overview of the study, but it can be made more concise. Consider focusing on the main findings and implications more succinctly.

Response 1: The abstract has been re-evaluated and revised, with the relevant points retained, to make it more concise.

“Technological advances in the production of printed circuit boards (PCBs) are increasing the number of components inserted on the surface. This has led the electronics industry to seek improvements in their inspection processes, often making it necessary to increase the level of automation on the production line. The use of machine vision for quality inspection within manufacturing processes has increasingly supported decision-making in the approval or rejection of products outside of the established quality standards. This study proposes a hybrid smart-vision inspection system with a machine vision concept and vision sensor equipment to verify 24 components and 8 screw threads. The goal of this study is to increase automated inspection reliability and reduce non-conformity rates in the manufacturing process on the assembly line of automotive products using machine vision. The system uses a camera to collect real-time images of the assembly fixtures, which are connected to a CMOS color vision sensor. The method is highly accurate in complex industry environments and exhibits specific feasibility and effectiveness. The results indicate high performance in the failure mode defined during this study, obtaining the best inspection performance through a strategy using Vision Builder for automated inspection. This approach reduced the action priority by improving the failure mode and effect analysis (FMEA) method.”

Comments 2: While comprehensive, the literature review section can be slightly condensed. Focus on the most relevant studies and their direct impact on the current research.

Response 2: More information about the hybrid inspection model was added to Section 3 (Machine Vision System) to strengthen the research proposal.

In the bibliometric search of the databases, no works were found that combine these two inspection approaches for electronic boards.

Comments 3: If you use any abbreviation, then spell out the full phrase or term the first time you use it in your paper and include the abbreviation in parentheses. You can use the abbreviation each time after that.

Response 3: A new revision was carried out to address the point raised. The article was also sent for editing to identify further opportunities to improve the writing.

 

Comments 4: Review grammar and syntax for clarity and coherence. Many typos and grammatical errors exist and need to be checked carefully.

Response 4: The new version of the article was reviewed by the MDPI English editing service, which checked and adjusted the text's grammar and technical terminology for academic publication. The certificate is attached.

 

Comments 5: The introduction is weak. What is the motivation for doing this research? The authors should mention the shortcomings in the literature and the motivation for doing this research. Also, you have to mention the contribution and objective of your research.

Response 5:

The motivation for this research arose during the bibliometric search: no papers could be found in the Scopus and Web of Science databases that apply computer vision and a vision sensor simultaneously for inspection.

We therefore saw an opportunity to conduct this research and publish the method.

The bibliometric research on the topic was added in Section 2.1.

 

Comments 6: The introduction and literature review lack support for related papers on fuzzy sets and their extensions.

Response 6: The introduction has been improved, adding the research methodology and a description of how the database search was performed.

The sections have been revised to be clearer and more objective.

Comments 7: The conclusion should be written again. The authors should provide more details about their research and results. It is also suggested to provide more information about the limitations of the research and recommendations for future work.

Response 7:

More information about the research method was added, and a paragraph on future work was added to the conclusion.

The proposed model structure is prepared for computer vision approaches as well as for future work using machine-learning algorithms: it is ready for creating training, validation, and test databases.

Previously, there were not enough images to train any type of algorithm with adequate accuracy.

In future work, comparisons can be made with other algorithms, such as neural networks.

Comments 8: How robust and generalizable is the proposed methodology for hybrid machine vision systems across different manufacturing industries?

Response 8:

The proposed model can be adapted to any inspection process currently performed visually, especially manual or semi-automatic assembly processes in which the operator makes the decision. The system supports decision-making and retains evidence of what was inspected, increasing the confidence level with the customer.

Hybrid approaches are built for dedicated processes, but this basis is generalizable.

Comments 9: Could there be alternative approaches to linking machine vision outputs with FMEA risk assessment?

Response 9: According to the AIAG-VDA FMEA Handbook, 1st Edition (2019), for Detection, any automated method (vision, equipment, sensor, and so on) is ranked 2 (High Detection).

As the focus of the research is decision-making during inspection, camera systems are the most suitable approach for this research.

Comments 10: Has the system been tested under real-world manufacturing conditions? If so, were there challenges in transitioning from a controlled environment to production?

Response 10: The system was implemented in a real manufacturing environment. Inspection is performed after the operator has completed all manual operations. Since the inspection system is fast and automatic, there were no barriers to implementation, as the system supports the operator's decision-making.

This makes the process safer and more reliable.

Comments 11: Please provide more detail about the algorithms used, such as the Hough transform for fiducial point detection, and their computational efficiency.

Response 11: The board fiducial is a fixed point that serves as the basis for all inspection strategies. Other reference points, such as components, were evaluated, but because of small positional variations during the component assembly phase, the PCB hole was defined as the reference, as it kept the inspections stable.

After deciding that the hole would be the reference, we chose the Hough transform, as it is a simple and effective method for detecting shapes and straight lines using OpenCV (on which the Vision Builder strategy is based).
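To illustrate the circular Hough voting underlying this fiducial-detection step, the sketch below implements the classic accumulator scheme in pure NumPy on a synthetic edge image. This is only a minimal illustration of the technique, not the authors' Vision Builder/OpenCV pipeline: the function name, image size, and hole radius are all hypothetical. Each edge pixel votes for every candidate center lying one radius away; the accumulator peak marks the hole center.

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Locate the center of a circle of known radius in a binary edge image
    via Hough voting: each edge pixel votes for all candidate centers."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Candidate centers lie `radius` away from the edge pixel.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col) of peak

# Synthetic 100x100 edge image: a circular "hole" of radius 12 at (40, 55).
img = np.zeros((100, 100), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 360)
img[np.round(40 + 12 * np.sin(t)).astype(int),
    np.round(55 + 12 * np.cos(t)).astype(int)] = 1

center = hough_circle_center(img, radius=12)
print(center)  # estimated center; should land near (40, 55)
```

In a production pipeline one would instead call a library routine such as OpenCV's `cv2.HoughCircles` on the preprocessed image, but the voting logic it performs is the one sketched here.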

Comments 12: Please compare the system's results with existing machine vision systems to highlight improvements or trade-offs.

Response 12: The systems available on the market focus on a single approach, such as automated optical inspection (AOI) machines and X-ray inspection machines. They use cameras to acquire images and then perform analysis, training, and judgment.

A camera system with integrated vision sensors, by contrast, must be customized for the process.

In this way, the proposed system can perform several inspections at the same time (taking advantage of the cycle time), without the need to reposition the product.

Technical information has been added on page 9 (Figure 9) in the framework for the proposed smart vision system: an IV-HG500GA vision sensor and an 8-megapixel camera.

The vision sensor datasheet is attached for reference; it is a low-cost solution with integrated illumination and camera.

 

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

No further comments

Comments on the Quality of English Language

No further comments
