Category Theory Framework for System Engineering and Safety Assessment Model Synchronization Methodologies
Round 1
Reviewer 1 Report
- The authors do not provide enough detail in the comparison with existing research, which could be verified by simulation for the actual experiments.
- The figures are not precise and are not presented according to the journal's requirements.
- The contributions are low for this journal.
Author Response
Summary of Changes and Response to Reviewers
We thank the reviewers for their insightful comments, which helped us improve our work. In the revised version of the article, modifications are highlighted in blue. Our answers to the reviewers are highlighted in green in this document.
Reviewer 1
Reviewer 1 notes that the introduction and references must be improved, insisting that the authors do not provide enough detail in the comparison with existing research.
We modified the introduction to emphasize the risks of inconsistencies between MBSE and MBSA models and to compare the categorical framework with the mathematical definition provided with SmartSync.
We also added more detail in the mathematical descriptions of comparison models in section 2.2. A justification for the choice of category theory was also added, as category theory combines the benefits of graph and set theory and provides a good representation of composition and abstraction.
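As a brief illustration of this justification (our own sketch in this response, not an excerpt from the revised manuscript; the symbols below are generic): a structural model can be viewed as a category $\mathcal{M}$ whose objects are model elements and whose morphisms record their composition or abstraction relations, with identities $\mathrm{id}_A : A \to A$ and an associative composition $g \circ f : A \to C$ for $f : A \to B$ and $g : B \to C$. As in graph theory, the elements and their relations form a diagram; as in set theory, each object may carry a set of attributes; and a correspondence between an MBSE model $\mathcal{M}_1$ and an MBSA model $\mathcal{M}_2$ can then be expressed as a structure-preserving functor $F : \mathcal{M}_1 \to \mathcal{M}_2$.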
Reviewer 1 notes that the methods must be improved.
We added more examples illustrating the mathematical definitions in section 3.1 to improve the readability of the methods.
Reviewer 1 states that the figures are not precise and are not presented according to the journal's requirements.
We corrected the captions of figures 13, 14, and 15 to comply with the journal requirements and translated the models used in table 1 and figure 17 into English, so that no French words remain in the figures.
Reviewer 2
Reviewer 2 states that “chapters 2 and 3 represent expert-specific work and are not suitable for the general reader. The reader is expected to have a broad knowledge of modelling basics such as SysML, S2ML and SAML, and of techniques and modelling-specific tools such as CESAM, Figaro, AltaRica, Modelica, and so on. Further references are given, but they are not very helpful for non-experts.”
We added more general presentations of system engineering and safety assessment at the beginning of section 2.1 and a more general presentation of category theory at the beginning of section 2.3.1 to make them more accessible to a less specialized audience.
We also added more examples of the concepts defined in section 3.1 to make the definitions easier to understand.
Reviewer 2 states that the references were checked in part.
We added DOI or HAL links to all references when available.
Reviewer 3
Reviewer 3 states that although the article fits with the topics of the special issue, there is no artificial intelligence content.
We understand the attractiveness of Artificial Intelligence in this topic, as it provides excellent tools to detect and compare patterns, and it could therefore provide significant help with model comparison.
However, our work is placed in the context of the safety assessment of critical systems, which requires certification of the products by safety authorities. Because the AI field currently faces challenges of explainability and trust, we do not believe that AI has yet reached the point where it could be accepted for certification purposes. We hope that these challenges may soon be overcome.
In the introduction, we added a point about the need for trusted methodologies in the certification process.
Reviewer 3 notes that it would be helpful to compare the outcomes produced with the authors' framework to those obtained without it, in order to appreciate the benefits the framework brings.
We added a discussion at the beginning of section 5 to highlight the benefits of the mathematical framework.
Section 5 as a whole discusses how the framework can help improve the results obtained in the case study.
Reviewer 3 notes that references can be improved.
We added more detail in the mathematical descriptions of comparison models in section 2.2. A justification for the choice of category theory was also added, as category theory combines the benefits of graph and set theory and provides a good representation of composition and abstraction.
Author Response File: Author Response.pdf
Reviewer 2 Report
The manuscript gives a comprehensive definition of model synchronisation methods for the model-based system engineering and model-based safety assessment domains. In a case study, the derived methods and concepts are explained on a fixed-wing drone application.
The manuscript has a clear structure, a well-explained problem and a sophisticated introduction to the state of the art and the mathematical description of system modelling.
Chapters 2 and 3 represent expert-specific work and are not suitable for the general reader. The reader is expected to have a broad knowledge of modelling basics such as SysML, S2ML and SAML, and of techniques and modelling-specific tools such as CESAM, Figaro, AltaRica, Modelica, and so on. Further references are given, but they are not very helpful for non-experts. This background is needed again in chapter 5 to follow the “Discussion” of the presented work and the mathematical framework to assess the consistency between structural models.
I do not believe that I can give a high-quality assessment of the content of these three chapters of the manuscript, because the presented research does not fall within my main field of knowledge. The manuscript requires from the reader a deep knowledge of the mathematical description of system engineering and safety assessment models.
Chapter 4 gives an easier-to-read introduction to the case study used to synchronize the differences and inconsistencies between MBSE and MBSA models.
The references were checked in part; all checked links are available and correct.
In general, the manuscript addresses an important topic in system modelling. The manuscript could be released for publication, without an assessment from my side of the theoretical mathematical models in general.
Author Response
Author Response File: Author Response.pdf
Reviewer 3 Report
This article proposes a category theory framework for system engineering and safety assessment models, focusing on the mathematical aspect of the synchronization methodologies. In the proposal, a tool is described that helps design and validate model synchronization methodologies mathematically.
The article fits the topics of the special issue, although no Artificial Intelligence content has been found.
The article is well structured, with a sound problem setting; it then explores the state of the art, and the proposal is well explained and demonstrated. Furthermore, an application in a case study is presented to show the resulting benefits. Finally, there is a discussion of points of interest and results. The article presents some novel content; the potential challenges addressed could be better described.
The study uses a drone for blood delivery to different hospitals and clinics across a large area of about 80 km. The blood packages are parachuted down to the delivery site.
The system was modelled and then made the subject of safety assessment, functional scenario, and multi-physics behavior models. SmartSync was then used to synchronize these three models with the architecture models.
It would be useful to have a comparison between the outcomes produced with your framework and what they would be without it, in order to appreciate the benefits your framework brings.
The Discussion section describes some limits of the present work and potential improvements, particularly considering connections, which are not currently handled by SmartSync, and the differences in abstraction level between models.
While the proposed mathematical framework is not new, it offers a better interpretation of the methodologies, provides a mathematical proof, and speeds up the learning curve.
Overall the paper is sound and consistent, perhaps a bit long, and written in very good English, with just a few typos.
It could be interesting to have more suggestions on the potential use of the framework and other fields of application, and how AI and ontologies could be helpful in improving the framework.
References seem adequate; perhaps they can be improved a bit.
Author Response
Round 2
Reviewer 1 Report
Overall, I believe the paper improved.
- It is recommended to zoom out models and abstracted models in figure 1.
- It is recommended to provide a comparison table based on various factors among MBSE, MBSA and the proposed model synchronization approach.
Author Response
Summary of Changes and Response to Reviewers
We thank the reviewers for their insightful comments on the first revision of our paper, which helped us improve our work. In the second revision of the article, modifications are highlighted in blue. Our answers to the reviewers are highlighted in green in this document.
Reviewer 1
Reviewer 1 notes that moderate English changes are required.
We corrected some grammatical, spelling, and style mistakes. In particular, lines 446 to 449 were reformulated for better readability.
It is recommended to zoom out models and abstracted models in figure 1.
Figure 1 was resized and modified to improve readability by changing the size of the text relative to the model illustrations: the models and abstracted models are now smaller, while the text is larger.
We also modified Figure 11 by increasing the size of some of the text to improve readability.
It is recommended to provide a comparison table based on various factors among MBSE, MBSA and the proposed model synchronization approach.
We have not yet encountered a mathematical formalization of consistency between heterogeneous models carried out in a similar way to ours (i.e., through an axiomatization and a formal definition of a consistency relation). We added this remark to the synchronization methodologies state-of-the-art section.
We believe that comparing our approach with similar frameworks, possibly from outside the system engineering field, would be a significant addition, and we intend to do so in future work. We added a discussion of this perspective in the discussion section.
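To illustrate the kind of formal consistency relation we mean, here is a minimal, hypothetical sketch (this is not the SmartSync implementation; the model contents, block names, and correspondence map are invented for illustration): two structural models are represented as connection graphs, a correspondence map plays the role of the abstraction between them, and the check reports every connection of the first model that has no counterpart in the second.

```python
# Hypothetical sketch of a consistency check between two structural models.
# Not the SmartSync tool; all model contents and names are illustrative only.
from typing import Dict, Set, Tuple

# Each model maps a block name to the set of blocks it is connected to.
mbse_model: Dict[str, Set[str]] = {
    "Battery": {"Motor"},
    "Motor": {"Propeller"},
    "Propeller": set(),
}
mbsa_model: Dict[str, Set[str]] = {
    "PowerSupply": {"Actuator"},
    "Actuator": set(),
}

# Correspondence (abstraction) map from MBSE blocks to MBSA blocks.
correspondence: Dict[str, str] = {
    "Battery": "PowerSupply",
    "Motor": "Actuator",
    "Propeller": "Actuator",
}

def inconsistencies(src: Dict[str, Set[str]],
                    dst: Dict[str, Set[str]],
                    corr: Dict[str, str]) -> Set[Tuple[str, str]]:
    """Return the source connections whose images are missing in the target model."""
    missing: Set[Tuple[str, str]] = set()
    for block, neighbours in src.items():
        for neighbour in neighbours:
            a, b = corr.get(block), corr.get(neighbour)
            if a is None or b is None:
                # An unmapped block end makes the connection unverifiable.
                missing.add((block, neighbour))
            elif a != b and b not in dst.get(a, set()):
                # Both ends map to distinct target blocks, yet the mapped
                # connection does not exist in the target model.
                missing.add((block, neighbour))
    return missing

print(inconsistencies(mbse_model, mbsa_model, correspondence))  # -> set()
```

In this toy example the check returns an empty set because every MBSE connection either maps to an existing MBSA connection or is abstracted into a single MBSA block; a non-empty result would list candidate inconsistencies to review.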