A Dynamic Bridge Architecture for Efficient Interoperability Between AUTOSAR Adaptive and ROS2
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
1. It would be valuable to provide a more precise explanation of the interfaces and interactions among the components depicted in Figure 1.
2. The manuscript does not explicitly describe the QoS mapping strategies when a mismatch occurs. It would be helpful to elaborate on the exact procedure followed by the QoS Manager in such cases.
3. The authors are encouraged to clarify the scalability of the proposed framework, related to the simulation results presented in Figures 6 and 7.
4. While latency is evaluated, reliability is another critical QoS metric. Have the authors considered evaluating additional metrics such as reliability or packet loss?
5. Since the simulation setup is based on vsomeip rather than a practical ECU, the reviewer raises the question of whether differences or constraints (e.g., memory, CPU, synchronization limitations) may arise in real-world applications.
6. It would be beneficial to clarify the number of connections and the driving distance considered in the test environment and scenario.
Author Response
Comment 1:
It would be valuable to provide a more precise explanation of the interfaces and interactions among the components depicted in Figure 1.
Response 1:
Thank you for this valuable comment. We agree that the original version of the manuscript did not provide sufficient detail regarding the interfaces and interactions among the components depicted in Figure 1. To address this, we have revised Sections 3.1.1 through 3.1.3 to offer more precise explanations. Specifically, the revised text clarifies how the Discovery Manager detects service availability and reports it to the Bridge Manager, how the Bridge Manager handles registration and deletion requests and distributes control commands through dedicated queues, and how the Message Router establishes the bidirectional data path and enforces DDS QoS policies during message forwarding. These revisions clarify the responsibilities of each component and describe their interactions in greater detail, which strengthens the overall explanation of the proposed architecture.
Modifications and additions can be found below.
"Please see the attachment."
Comment 2:
The manuscript does not explicitly describe the QoS mapping strategies when a mismatch occurs. It would be helpful to elaborate on the exact procedure followed by the QoS Manager in such cases.
Response 2:
Thank you for pointing out this omission. We agree that a more explicit, step-by-step description of the QoS mismatch strategy is essential for the clarity and reproducibility of our work.
Our "fail-safe" procedure is a multi-stage process orchestrated by the Bridge Manager, with critical verification steps delegated to specialized modules like the QoS Manager. The philosophy is to make any configuration mismatch explicit and immediately visible, rather than masking it with a runtime adaptation.
The exact procedure is as follows:
- Discovery & Initial Mapping: The Discovery Manager and Discovery Synchronizer first detect an endpoint and verify that a mapping rule exists for it in the JSON configuration. This result is reported to the Bridge Manager.
- Delegation for QoS Verification: Upon successful mapping, the Bridge Manager delegates the detailed compatibility check. It sends both the discovered QoS requirements and the mandatory QoS profile from the configuration file to the QoS Manager.
- QoS Compatibility Check (QoS Manager): The QoS Manager performs a strict comparison of the two profiles. It then reports a "Success" or "Failure" result back to the Bridge Manager.
- Final Decision (Bridge Manager): If the Bridge Manager receives a "Failure" report from the QoS Manager, it aborts the connection setup process and logs the error. An endpoint is only created if a "Success" report is received.
This clear, multi-stage verification, with the QoS Manager acting as the specialist for compatibility checks, ensures that every active connection adheres to the system's intended design. This detailed procedure is now explicitly described in Section 3.1.3 of the revised manuscript, and its implications are discussed in the Conclusion (Section 5).
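To make this hand-off easier to follow, the minimal C++ sketch below mirrors the four steps above; all names (QosProfile, QosManager::isCompatible, onEndpointDiscovered) are hypothetical stand-ins for the modules described in the manuscript, not the actual implementation.

```cpp
#include <iostream>
#include <optional>

// Hypothetical QoS profile: only the two fields needed for the sketch.
struct QosProfile {
    bool reliable;       // Reliable vs. Best Effort
    int  historyDepth;   // e.g., KEEP_LAST depth
};

// QoS Manager role (step 3): strict comparison of the discovered profile
// against the mandatory profile taken from the JSON configuration.
namespace QosManager {
bool isCompatible(const QosProfile& discovered, const QosProfile& mandatory) {
    return discovered.reliable == mandatory.reliable &&
           discovered.historyDepth >= mandatory.historyDepth;
}
}  // namespace QosManager

// Bridge Manager role (steps 1, 2 and 4): returns true only if the endpoint
// may be created; otherwise the setup is aborted and the error is logged.
bool onEndpointDiscovered(const std::optional<QosProfile>& mappingRule,
                          const QosProfile& discovered) {
    if (!mappingRule) {                                         // step 1: no mapping rule found
        std::cerr << "No mapping rule in configuration; ignoring endpoint\n";
        return false;
    }
    if (!QosManager::isCompatible(discovered, *mappingRule)) {  // steps 2-3
        std::cerr << "QoS mismatch reported; aborting connection setup\n";  // step 4
        return false;
    }
    return true;  // "Success" report: endpoint creation is authorized
}

int main() {
    const QosProfile mandatory{true, 10};     // e.g., Reliable, depth 10 (from config)
    const QosProfile discovered{false, 10};   // e.g., a Best Effort publisher
    onEndpointDiscovered(mandatory, discovered);  // prints the mismatch and aborts
}
```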
Modifications and additions can be found below.
"Please see the attachment."
Comment 3: The authors are encouraged to clarify the scalability of the proposed framework, related to the simulation results presented in Figures 6 and 7.
Response 3:
Thank you for giving us the opportunity to more clearly explain the scalability of our bridge. Your question made us realize that we needed to better present the experimental context that highlights the advantages of our dynamic bridge.
First, we would like to clarify the experimental context for Figures 6 and 7. These graphs show the performance results when our own bridge was configured to operate in a 'Static Mode', emulating the behavior of a conventional static approach. This was done to establish a baseline for a clear comparison against the performance of our bridge's core feature, the 'Dynamic Mode'.
Furthermore, to enhance the practical relevance of our study, we added tests on a representative embedded platform (NVIDIA Jetson Xavier NX) during this revision process, in addition to the original PC environment. Figures 6 and 7 now show the combined results from both environments.
To clarify the scalability characteristics shown in these figures, Figures 6 and 7 illustrate that when the bridge operates in 'Static Mode', both CPU and memory usage increase in a predictable, linear fashion as the number of connections grows. This behavior is the inherent scalability model of a static architecture, resulting from its need to pre-allocate and maintain resources for all potential connections, regardless of their activity.
We presented this 'Static Mode' scalability model as a baseline to highlight the scalability of our bridge's 'Dynamic Mode'. When our bridge operates in 'Dynamic Mode', it consumes near-zero resources in an idle state regardless of the number of connections, as resources are only allocated when communication becomes active. This provides a fundamentally more efficient scalability model compared to the predictable resource consumption of the static mode.
Reflecting your comment, we have revised the discussion of Figures 6 and 7 in Section 4 (Experiments) of our manuscript. Thank you again for pointing out this important aspect, which has helped us to more clearly convey the core advantages of our research.
Modifications and additions can be found below.
"Please see the attachment."
Comment 4: While latency is evaluated, reliability is another critical QoS metric. Have the authors considered evaluating additional metrics such as reliability or packet loss?
Response 4:
We thank the reviewer for raising this excellent point about reliability. We agree that metrics like packet loss are critical for evaluating the communication robustness of the system.
The reviewer's question touches upon a key aspect of reliability, which is the system's ability to handle adverse network conditions. However, another fundamental aspect of reliability is the stability of the software components themselves under processing load. An unstable bridge can become a bottleneck and drop or delay messages internally, even on a perfect network.
Therefore, we decided to first investigate this foundational issue: the stability and predictability of the bridge itself under stress. We believe understanding this "processing reliability" is a prerequisite to any meaningful end-to-end communication reliability analysis. During the revision process, we conducted an extensive performance boundary analysis on the target embedded platform (Jetson Xavier NX), with the results now presented as performance heatmaps in the manuscript.
Our key findings from the heatmaps show two distinct performance regions. The first is a large 'stable operational envelope', where the bridge consistently maintains low latency with minimal variance. The second is a clearly defined 'performance cliff'. When approaching system limits with both high connection counts and large payloads, both the mean latency and its standard deviation increase sharply and non-linearly.
This dramatic rise in standard deviation is the critical takeaway, as it signifies the point where the system's behavior becomes erratic, and its performance is no longer predictable or reliable. This analysis provides a quantitative map of the bridge's operational limits, offering a clear guide to ensuring its stability under heavy, real-world loads.
We believe this analysis provides a more fundamental answer regarding the overall reliability of our bridge. We have added these new figures and the corresponding analysis to the Validation section.
Modifications and additions can be found below.
"Please see the attachment."
Comment 5: Since the simulation setup is based on vsomeip rather than a practical ECU, the reviewer raises the question of whether differences or constraints (e.g., memory, CPU, synchronization limitations) may arise in real-world applications.
Response 5:
Thank you for raising this critical point regarding the practical applicability of our work. We completely agree that the constraints of a real-world embedded ECU are fundamentally different from a development PC. To address this, we have expanded our performance validation in the revised manuscript.
As we mentioned in Response 3, we added a resource-constrained embedded platform, the NVIDIA Jetson Xavier NX, to our entire performance evaluation in Section 4.1. All latency and resource usage tests, including those presented in Figures 5, 6, and 7, were conducted on both the PC and the Jetson platform.
This dual-platform approach allows for a direct comparison and clearly demonstrates how resource constraints on an embedded platform affect latency, CPU usage, and memory consumption. For instance, Figure 7 now shows the increase in CPU and memory usage on the Jetson platform in the emulated static mode, validating the importance of our dynamic approach for embedded systems. We believe these additions strengthen the manuscript by grounding our performance claims in a hardware environment that is relevant to the automotive domain.
Modifications and additions can be found below.
"Please see the attachment."
Comment 6: It would be beneficial to clarify the number of connections and the driving distance considered in the test environment and scenario.
Response 6:
Thank you for this suggestion to improve the clarity of our experimental setup. We have revised Section 4.2.1 to include these details.
For the real-world test, there were two continuous data connections established through the bridge: one for LiDAR data (sensor_msgs/PointCloud2) and one for GPS data (sensor_msgs/NavSatFix).
The driving test was conducted on a predefined route of approximately 200 meters around a campus lake, which included straight sections and a 90-degree curve to ensure the system was tested under realistic driving conditions. This information has been added to the manuscript to provide a clearer picture of the validation scenario.
Additions can be found below.
"Please see the attachment."
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Overall Assessment
The manuscript presents a novel dynamic bridge architecture to improve interoperability between Adaptive AUTOSAR and ROS2, addressing critical limitations of static approaches (e.g., resource inefficiency and QoS incompatibility). The work is well-structured, experimentally validated, and contributes significantly to automotive software integration. However, several technical aspects require clarification, and methodological gaps need to be addressed to strengthen its robustness and applicability.
Major Comments
- Technical Novelty vs. Related Work (Section 2.5)
- The distinction between the proposed dynamic bridge and prior solutions (e.g., ARISA project, ROS2-AUTOSAR integration frameworks) is insufficiently emphasized. While the dynamic resource/QoS management is highlighted, quantitative comparisons (e.g., latency, memory footprint) against existing static bridges are only partially addressed (Section 4.1).
- Suggestion: Include a table comparing performance metrics (latency, memory, scalability) of the proposed bridge against Iwakami et al. [9], Hong et al. [8], and Cho et al. [20]. Explicitly state how the dynamic approach outperforms these in multi-connection scenarios.
- QoS Management Depth (Section 3.1.3)
- The QoS Manager is described as automatically aligning policies but lacks implementation details. How are conflicts resolved (e.g., if a ROS2 node requests "Reliable" QoS while AUTOSAR uses "Best Effort")? The paper assumes predefined compatibility via the JSON configuration, but real-world mismatches may require negotiation.
- Suggestion: Elaborate on the conflict-resolution mechanism. If policies are strictly predefined, justify this design choice and discuss limitations for dynamic QoS negotiation (e.g., Cakir et al. [19]).
- Hardware Relevance and Scalability (Section 4.1)
- Experiments use a high-spec PC (Intel i7, 32GB RAM), which does not reflect resource-constrained automotive ECUs. Memory savings (21.25 MB at 100 connections) may be insignificant in PCs but critical in embedded systems; however, no embedded-platform validation is provided.
- Suggestion: Add tests on an automotive-grade ECU (e.g., NVIDIA Xavier) to validate resource efficiency in target hardware. Discuss implications for real-world deployment.
- Real-World Test Limitations (Section 4.2)
- The autonomous driving test uses a vsomeip-based mock AUTOSAR application, bypassing commercial AUTOSAR stacks. This overlooks challenges like certification, security, or timing constraints in production environments.
- Suggestion: Acknowledge this limitation and propose future work with industry partners (e.g., using Vector AUTOSAR Adaptive).
- Data Conversion Overhead (Section 4.1)
- Figure 5(b) shows high latency for DDS-to-SOME/IP conversion (attributed to CDR deserialization). This bottleneck is critical for large sensor data (e.g., LiDAR). No optimization strategies (e.g., parallelization, hardware acceleration) are discussed.
- Suggestion: Analyze optimizations (e.g., zero-copy techniques) to reduce conversion latency, especially for >64 KB payloads.
Minor Comments
- Bridge Configuration (Section 3.3)
- The JSON configuration file is practical but non-standardized. How does it align with AUTOSAR ARXML or ROS2 parameter formats?
- Suggestion: Briefly discuss interoperability with industry standards (e.g., ARXML converters).
- ROS2 Message Compatibility (Section 3.3)
- The bridge uses fastddsgen to generate ROS2-compatible types. Does this support all ROS2 message types (e.g., nested structures, actions)?
- Suggestion: Clarify limitations (e.g., support for tf2_msgs or custom types).
- Latency Metrics (Section 4.2.2)
- End-to-end latency (20–30 ms) is acceptable for Autoware’s 100-ms cycle but may not suffice for safety-critical functions (e.g., emergency braking).
- Suggestion: Discuss applicability boundaries and propose optimizations for lower-latency use cases.
- Missing Details in Architecture
- The thread-per-connection model (Section 4.1) ensures scalability but may exhaust threads in large systems. Specify the maximum supported connections.
- The "Discovery Synchronizer" (Section 3.1.1) needs algorithmic details (e.g., how mapping rules resolve conflicts).
Experimental Rigor
- Statistical Validity
- Latency measurements (Figures 5–6, 13–14) lack statistical analysis (e.g., confidence intervals, p-values for comparisons).
- Suggestion: Report standard deviations/confidence intervals for all latency results.
- QoS Policy Testing
- Table 2 lists supported QoS policies, but no experiments validate their impact (e.g., how "Deadline" affects latency under congestion).
- Suggestion: Add tests varying QoS policies (e.g., "Reliable" vs. "Best Effort") to quantify performance trade-offs.
Clarity and Presentation
- Figures 2–4: Use higher-resolution diagrams. Figure 1’s table is cut off.
- Table 4: Label rows/columns clearly (e.g., "Direction: SOME/IP→DDS").
- Abbreviations: Define all acronyms at first use (e.g., CDR in Section 4.1).
Conclusion and Impact
The dynamic bridge architecture effectively solves key interoperability challenges, demonstrating scalability, efficiency, and real-time capability. The work advances SDV development by bridging research (ROS2) and production (AUTOSAR) ecosystems. With revisions addressing the above points, this paper will be a valuable contribution to the field.
Comments on the Quality of English Language
- Clarity: Avoid long sentences (e.g., Section 3.2.1, Step 6). Split into shorter statements.
- Terminology: Define acronyms at first use (e.g., CDR in Section 4.1).
- Tense: Use past tense for methods/results (e.g., "was measured" instead of "is measured").
Author Response
Major Comment and Suggestion 1: Technical Novelty vs. Related Work (Section 2.5)
- The distinction between the proposed dynamic bridge and prior solutions (e.g., ARISA project, ROS2-AUTOSAR integration frameworks) is insufficiently emphasized. While the dynamic resource/QoS management is highlighted, quantitative comparisons (e.g., latency, memory footprint) against existing static bridges are only partially addressed (Section 4.1).
- Suggestion: Include a table comparing performance metrics (latency, memory, scalability) of the proposed bridge against Iwakami et al. [9], Hong et al. [8], and Cho et al. [20]. Explicitly state how the dynamic approach outperforms these in multi-connection scenarios.
Major Response 1:
Thank you for your clear suggestion to include a direct comparison to demonstrate the benefits of our work. We fully agree that a strong, quantitative comparison is essential to validate the advantages of our proposed bridge.
In preparing this comparison, we noted that a direct, one-to-one numerical comparison with the cited works [8, 9, 20] (now [9, 10, 21]) could be scientifically misleading due to the vastly different hardware and software environments used in their original experiments.
Therefore, to provide a comparison that is both fair and rigorous, we have addressed your request in two key ways in the revised manuscript:
First, we have added a new table in the revised Section 2.5 (Related Work). This new table focuses on a qualitative comparison of the architectural and functional characteristics. Specifically, it compares key design choices such as the architectural deployment model (i.e., standalone process vs. an integrated ROS2 node), the use of SOME/IP-SD, the direct utilization of DDS Discovery, the resulting cross-domain synchronization capabilities, the presence of a resource reclamation function, and the degree of support for DDS QoS policy configuration.
Second, to better emphasize the quantitative comparison as suggested, we have revised our baseline analysis in Section 4. In this existing experiment, we configure our bridge in a “static-like” mode for a direct comparison on the same hardware. We have improved Figure 7 and its description to more clearly contrast the inefficient, linear scaling of the static-like mode against the near-zero idle consumption of our dynamic mode. We believe this revision now more effectively delivers the clear, quantitative evidence of our architecture’s benefits.
We hope that these revisions effectively address your concerns and have improved the clarity and impact of our manuscript.
Modifications and additions can be found below.
Please see the attachment.
Major Comment and Suggestion 2: QoS Management Depth (Section 3.1.3)
- The QoS Manager is described as automatically aligning policies but lacks implementation details. How are conflicts resolved (e.g., if a ROS2 node requests "Reliable" QoS while AUTOSAR uses "Best Effort")? The paper assumes predefined compatibility via the JSON configuration, but real-world mismatches may require negotiation.
- Suggestion: Elaborate on the conflict-resolution mechanism. If policies are strictly predefined, justify this design choice and discuss limitations for dynamic QoS negotiation (e.g., Cakir et al. [19]).
Major Response 2:
Thank you for this insightful comment and valuable suggestion. You have correctly identified a critical aspect of our architecture that required a more thorough explanation. We agree that a detailed discussion of our design philosophy, its rationale, and its trade-offs is essential for the paper’s clarity.
Our primary contribution in this paper is the resolution of the dynamic resource management problem inherent in static bridges. To maintain a focused research scope on this core issue, we made a conscious design choice to prioritize a predictable and robust communication model. Consequently, our bridge resolves QoS conflicts at the system design stage, not at runtime. The JSON configuration file serves as the single source of truth, and the bridge’s role is to strictly enforce this pre-approved design. If a runtime request does not conform to the configuration, the connection is rejected.
This fail-safe approach was chosen for two key reasons. First, it ensures maximum predictability, which is paramount in automotive systems. Second, as you noted by referencing Cakir et al., AUTOSAR SOME/IP lacks a standard protocol for dynamic QoS negotiation. Developing a proprietary, non-standard negotiation layer would not only fall outside our research focus but could also compromise the interoperability and reliability we aim to provide.
We believe this approach is an engineering trade-off for the problem we set out to solve. To make this clear, we have revised the manuscript to incorporate this detailed rationale and discussion of limitations.
In response to this valuable feedback, we have revised the manuscript to enhance its clarity and detail. The QoS Manager description in Section 3.1.3 has been rewritten to provide an explanation of its strict policy enforcement, including the step-by-step fail-safe procedure for handling mismatches and the rationale for this design choice. Complementing this, the Conclusion in Section 5 has been expanded to discuss the limitations of this static approach and to propose the development of dynamic QoS negotiation as a key direction for future research.
Modifications and additions can be found below.
Please see the attachment.
Major Comment and Suggestion 3: Hardware Relevance and Scalability (Section 4.1)
- Experiments use a high-spec PC (Intel i7, 32GB RAM), which does not reflect resource-constrained automotive ECUs. Memory savings (21.25 MB at 100 connections) may be insignificant in PCs but critical in embedded systems; however, no embedded-platform validation is provided.
- Suggestion: Add tests on an automotive-grade ECU (e.g., NVIDIA Xavier) to validate resource efficiency in target hardware. Discuss implications for real-world deployment.
Major Response 3:
We are grateful for the reviewer's feedback, which correctly identified the need for validation on a resource-constrained embedded platform to demonstrate the paper's practical relevance. In direct response, we have now incorporated a new validation performed on an NVIDIA Jetson Xavier NX.
The results of this new validation are detailed throughout the revised Section 4.1, where the performance on the embedded ECU is now presented alongside the original PC data for a direct, apples-to-apples comparison.
A key part of this new validation is the analysis presented in Figure 7. It uses an emulated "static-like" mode to establish a baseline and clearly illustrates that the resource wastage inherent to static architectures is far more severe and critical on the Jetson platform. This finding, combined with the performance data on latency and scalability presented in Figures 5 and 6, reinforces the practical necessity of our dynamic approach for real-world automotive ECUs.
We hope that these additions, which provide the requested validation on target hardware, effectively address the reviewer's concerns and strengthen the relevance of our work.
Modifications and additions can be found below.
Please see the attachment.
Major Comment and Suggestion 4: Real-World Test Limitations (Section 4.2)
- The autonomous driving test uses a vsomeip-based mock AUTOSAR application, bypassing commercial AUTOSAR stacks. This overlooks challenges like certification, security, or timing constraints in production environments.
- Suggestion: Acknowledge this limitation and propose future work with industry partners (e.g., using Vector AUTOSAR Adaptive)
Major Response 4:
Thank you for highlighting this important point. We agree that our current evaluation relies on a vsomeip-based mock AUTOSAR application and therefore does not capture the certification, security, and strict timing requirements of production-grade AUTOSAR Adaptive stacks. To address this concern, we have revised Section 4.2 to explicitly acknowledge this limitation. Furthermore, we have added a statement on future work, emphasizing our plan to collaborate with industry partners and extend our evaluation to commercial AUTOSAR Adaptive solutions such as Vector’s platform. This will enable a more comprehensive validation of the proposed bridge under real-world constraints beyond the research prototype stage.
Modifications and additions can be found below.
Please see the attachment.
Major Comment and Suggestion 5: Data Conversion Overhead (Section 4.1)
- Figure 5(b) shows high latency for DDS-to-SOME/IP conversion (attributed to CDR deserialization). This bottleneck is critical for large sensor data (e.g., LiDAR). No optimization strategies (e.g., parallelization, hardware acceleration) are discussed.
- Suggestion: Analyze optimizations (e.g., zero-copy techniques) to reduce conversion latency, especially for >64 KB payloads.
Major Response 5:
Thank you for raising the critical point about data conversion overhead in Section 4.1. We acknowledge that DDS-to-SOME/IP translation introduces a noticeable bottleneck, as highlighted in Figure 5(b). This overhead primarily originates from the intrinsic CDR deserialization process of DDS, which cannot be directly optimized at the bridge application level. We have revised Section 4.1 to explicitly recognize this as a key limitation of DDS-based communication.
Furthermore, in Section 5, we extended our discussion of future work to include possible mitigation strategies at the middleware and hardware level. These include exploring improved DDS implementations with more efficient serialization mechanisms, as well as hardware-assisted techniques for large payloads such as LiDAR data. While these approaches go beyond the scope of the current study, we believe they represent important directions for addressing the conversion bottleneck and complement the bridge-level latency optimization strategies already proposed.
Modifications and additions can be found below.
Please see the attachment.
Minor Comment and Suggestion 1: Bridge Configuration (Section 3.3)
- The JSON configuration file is practical but non-standardized. How does it align with AUTOSAR ARXML or ROS2 parameter formats?
- Suggestion: Briefly discuss interoperability with industry standards (e.g., ARXML converters).
Minor Response 1:
Thank you for this helpful comment. We agree that JSON is a non-standard configuration format. To clarify this point, we have revised Section 3.3 to explicitly state that while JSON was adopted for flexibility and rapid prototyping at the research stage, it is non-standard compared to AUTOSAR ARXML manifests or ROS2 parameter files. We also emphasized that the elements specified in the JSON file conceptually correspond to those defined in ARXML and ROS2 conventions. For example, parameters such as someip.serviceID and someip.instanceID in JSON correspond to SOME/IP entries defined in ARXML, while QoS-related fields conceptually map to standard DDS/ROS2 QoS policies. This addition clarifies the relationship between our research-oriented configuration and existing industry standards.
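As a purely illustrative aid, the C++ sketch below shows how one such mapping rule could be represented and read from the JSON configuration; the someip.serviceID and someip.instanceID keys follow the manuscript's description, while the remaining keys, the struct, and the use of the nlohmann/json library are assumptions made for this example, not the bridge's actual code.

```cpp
#include <cstdint>
#include <string>
#include <nlohmann/json.hpp>   // assumed JSON library, used here only for illustration

// Illustrative in-memory form of one bridge mapping rule.
struct MappingRule {
    std::uint16_t serviceId;    // <- "someip.serviceID"  (corresponds to a SOME/IP entry in ARXML)
    std::uint16_t instanceId;   // <- "someip.instanceID" (corresponds to a SOME/IP entry in ARXML)
    std::string   rosTopic;     // hypothetical ROS2 topic field
    std::string   reliability;  // conceptually maps to a DDS/ROS2 QoS policy
};

// Parse one rule object; the "ros2" and "qos" keys are hypothetical examples.
MappingRule parseRule(const nlohmann::json& j) {
    return MappingRule{
        j.at("someip").at("serviceID").get<std::uint16_t>(),
        j.at("someip").at("instanceID").get<std::uint16_t>(),
        j.at("ros2").at("topic").get<std::string>(),
        j.at("qos").at("reliability").get<std::string>()
    };
}
```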
Modifications and additions can be found below.
Please see the attachment.
Minor Comment and Suggestion 2: ROS2 Message Compatibility (Section 3.3)
- The bridge uses fastddsgen to generate ROS2-compatible types. Does this support all ROS2 message types (e.g., nested structures, actions)?
- Suggestion: Clarify limitations (e.g., support for tf2_msgs or custom types)
Minor Response 2:
Thank you for your valuable suggestion. To address your comment and clarify the scope of ROS2 message type support in our implementation, we have revised the corresponding section as follows:
Please see the attachment.
Minor Comment and Suggestion 3: Latency Metrics (Section 4.2.2)
- End-to-end latency (20–30 ms) is acceptable for Autoware’s 100-ms cycle but may not suffice for safety-critical functions (e.g., emergency braking).
- Suggestion: Discuss applicability boundaries and propose optimizations for lower latency use cases.
Minor Response 3:
We sincerely appreciate your insightful feedback on the latency metrics. We fully agree that while the measured latency of 20-30ms is sufficient for Autoware’s general processing cycle, it is important to explicitly acknowledge its limitations for safety-critical functions such as emergency braking.
To reflect your valuable suggestions, we have revised the manuscript as follows:
First, at the end of Section 4.2.2, we added a statement clarifying that the proposed bridge is well-suited for non-safety-critical functions such as Autoware’s perception and path planning modules, but additional optimization may be required for safety-critical functions (e.g., emergency braking) with strict real-time constraints.
Second, in Section 5, we introduced specific strategies to further reduce latency. These include zero-copy data transmission to minimize memory copy overhead, implementing a priority-based queue within the bridge to accelerate critical messages, and kernel-level optimizations.
Through these revisions, we aim to more clearly define the applicability of our proposed bridge and present concrete directions for future research to extend its use to safety-critical functions. We are grateful once again for your valuable comment, which has significantly improved the manuscript.
Modifications and additions can be found below.
Please see the attachment.
Minor Comment and Suggestion 4: Missing Details in Architecture
- The thread-per-connection model (Section 4.1) ensures scalability but may exhaust threads in large systems. Specify the maximum supported connections.
- The "Discovery Synchronizer" (Section 3.1.1) needs algorithmic details (e.g., how mapping rules resolve conflicts).
Minor Response 4:
- Thank you for this important question. Upon investigating this point, we found that the maximum number of stable connections is not a single, static value but is closely intertwined with another key factor: the message payload size.
To provide a clear and practical answer that captures this important relationship, we conducted a new performance boundary analysis, presenting the results as a heatmap in Figure 8. This heatmap visualizes the operational limits, showing, for instance, that the bridge can reliably support up to 80 concurrent connections with smaller (≤16 KB) payloads, while this threshold decreases to 30-40 connections as the payload size increases to 64 KB.
We thank the reviewer for their insightful question, which prompted us to conduct this deeper analysis and significantly improve the paper.
- We thank the reviewer for their valuable suggestion to enhance clarity on this point. Following the reviewer’s feedback, we have expanded Section 3.1.1 to more explicitly detail the algorithmic process of the Discovery Manager and its inherent conflict resolution mechanism.
In the revised section, we now clarify that the system's deterministic behavior is ensured by its strict adherence to a user-defined configuration file containing all valid mapping rules. We further describe how the Discovery Manager monitors for new services and, upon detection, validates them against these predefined rules. The text now explains that commands to establish a bridge link are issued only when a valid and explicit mapping is found in the configuration. This approach, where no action is taken for services without a corresponding rule, serves as the core of the conflict resolution strategy, as it inherently prevents ambiguities or conflicts from arising from spontaneous service discoveries. We hope these additions make the process much clearer for the reader.
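For illustration only, the following C++ sketch captures the "no rule, no action" gating described above; the rule table contents, service identifiers, and function names are hypothetical and do not reflect the actual implementation.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Hypothetical key for a discovered SOME/IP service instance.
using ServiceKey = std::pair<std::uint16_t, std::uint16_t>;  // {serviceID, instanceID}

// Rule table loaded once from the user-defined configuration file.
// The entries below are invented examples, not real identifiers.
const std::map<ServiceKey, std::string> kMappingRules = {
    {{0x1234, 0x0001}, "/lidar/points"},
    {{0x1235, 0x0001}, "/gps/fix"},
};

// Called when SOME/IP-SD reports a newly offered service.
void onServiceOffered(std::uint16_t serviceId, std::uint16_t instanceId) {
    const auto it = kMappingRules.find({serviceId, instanceId});
    if (it == kMappingRules.end()) {
        // No explicit rule: deliberately take no action, so unknown or
        // spontaneous services can never create ambiguous bridge links.
        return;
    }
    // A valid, explicit mapping exists: request the Bridge Manager to
    // establish the link (command dispatch itself is omitted here).
    std::cout << "Establish bridge link to " << it->second << "\n";
}

int main() {
    onServiceOffered(0x1234, 0x0001);  // rule exists -> bridge command issued
    onServiceOffered(0x9999, 0x0001);  // no rule    -> silently ignored
}
```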
Please see the attachment.
Experimental Rigor Comment and Suggestion 1: Statistical Validity
- Latency measurements (Figures 5–6, 13–14) lack statistical analysis (e.g., confidence intervals, p-values for comparisons).
- Suggestion: Report standard deviations/confidence intervals for all latency results.
Experimental Rigor Response 1:
We thank the reviewer for this important suggestion to enhance the experimental rigor of our study. We fully agree that providing statistical analysis for our latency measurements is crucial for validating the stability and predictability of our proposed bridge.
To address this, we have incorporated statistical metrics throughout the revised manuscript. For the performance analysis in Section 4.1, we have introduced a more comprehensive performance boundary analysis, visualized as a heatmap in the new Figure 8. This analysis is based on a large dataset for statistical significance, and crucially, it includes a dedicated heatmap of the standard deviation of latency.
Furthermore, for the real-world validation results corresponding to Figures 13 and 14, we have added a new accompanying graph that explicitly visualizes the communication jitter, which we have defined and calculated as the standard deviation of latency over recent measurement intervals.
We believe these additions, the standard deviation heatmap and the jitter analysis, directly provide the statistical rigor the reviewer requested and strengthen the validation of our work. We thank the reviewer again for their constructive feedback.
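For readers interested in how such a jitter value can be computed, the short C++ sketch below shows one straightforward way to take the standard deviation of latency over a sliding window of recent measurements; the class name and window handling are illustrative assumptions rather than the code used in the paper.

```cpp
#include <cmath>
#include <cstddef>
#include <deque>
#include <numeric>

// Rolling jitter: standard deviation of the latencies (in ms) observed over
// the most recent `window` measurements, recomputed as each new sample arrives.
class JitterTracker {
public:
    explicit JitterTracker(std::size_t window) : window_(window) {}

    double addSample(double latencyMs) {
        samples_.push_back(latencyMs);
        if (samples_.size() > window_) samples_.pop_front();

        const double n    = static_cast<double>(samples_.size());
        const double mean = std::accumulate(samples_.begin(), samples_.end(), 0.0) / n;
        double sqSum = 0.0;
        for (double s : samples_) sqSum += (s - mean) * (s - mean);
        return std::sqrt(sqSum / n);  // population standard deviation over the window
    }

private:
    std::size_t        window_;
    std::deque<double> samples_;
};
```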
Additions can be found below.
Please see the attachment.
Experimental Rigor Comment and Suggestion 2: QoS Policy Testing
- Table 2 lists supported QoS policies, but no experiments validate their impact (e.g., how "Deadline" affects latency under congestion).
- Suggestion: Add tests varying QoS policies (e.g., "Reliable" vs. "Best Effort") to quantify performance trade-offs.
Experimental Rigor Response 2:
We sincerely thank the reviewer for their insightful feedback. This has given us a valuable opportunity to more clearly articulate the scope and phased approach of our evaluation.
In this study, we determined that it was essential to first validate the efficiency and lightweight nature of the proposed bridge architecture itself. This is because a heavy and inefficient bridge would become a bottleneck for the entire system, regardless of the underlying middleware’s QoS performance. Therefore, as per the reviewer’s other suggestion, our performance evaluation on the target embedded module was focused on metrics that directly measure the bridge’s own overhead, such as CPU utilization, memory usage, and internal processing latency. The results of these experiments demonstrate that our bridge operates efficiently with minimal resource consumption on the target hardware.
With the bridge’s component-level efficiency now demonstrated, we agree that a system-level performance analysis under different QoS policies, as the reviewer suggests, is indeed the natural and important next step.
However, the design and performance validation of the bridge architecture itself is a substantial topic. We were concerned that attempting to cover both an in-depth validation of our bridge architecture and a comprehensive, system-level QoS characterization in a single manuscript would broaden the scope too widely and dilute the paper's message about our contribution, the dynamic bridge architecture.
For these reasons, we concluded that a phased approach to the research was more appropriate, and for this paper, we focused on the first phase of validating the bridge architecture’s efficiency. Consequently, the system-level QoS performance analysis has been planned as a distinct follow-up study. We revised the Conclusions section of our manuscript to clarify the scope of our research and our plans for future work, as follows:
Please see the attachment.
Once again, we sincerely thank the reviewer for their valuable comments, which have helped us to clarify the stages of our research and defined a clear path forward.
Clarity and Presentation Comment 1:
Figures 2–4: Use higher-resolution diagrams. Figure 1’s table is cut off.
Clarity and Presentation Response 1:
Thank you for your feedback on readability. We have revised the figures accordingly. Figures 2-4 were created using vector graphics (SVG), and the label font size has been increased for clearer rendering. For Figure 1, we adjusted the layout and alignment to correct the cut-off. Below is a breakdown of the changes we have made.
Please see the attachment.
Clarity and Presentation Comment 2:
Table 4: Label rows/columns clearly (e.g., "Direction: SOME/IP→DDS").
Clarity and Presentation Response 2:
We thank the reviewer for this helpful suggestion to improve the clarity of our presentation. We agree that more explicit labels in Table 4 would be beneficial for the reader.
We have revised Table 4 as suggested. The rows/columns are now clearly labeled using a more descriptive format (e.g., "Direction: SOME/IP→DDS") to ensure the information is unambiguous and easy to interpret.
Clarity and Presentation Comment 3:
Avoid long sentences (e.g., Section 3.2.1, Step 6). Split into shorter statements.
Clarity and Presentation Response 3:
We thank the reviewer for their valuable feedback on improving the manuscript's readability. We agree that some sentences were overly long, which could hinder clarity.
We have revised the paragraph identified by the reviewer in Section 3.2.1 (Step 6), splitting the longer statements into shorter, more direct sentences to improve clarity.
Response to Comments on the Quality of English Language
Point 1: Define acronyms at first use (e.g., CDR in Section 4.1).
Response 1:
Thank you for pointing this out. We have reviewed the manuscript and added the full definitions for acronyms at their first use to improve clarity. The corrections are as follows:
Please see the attachment.
Point 2: Use past tense for methods/results (e.g., "was measured" instead of "is measured").
Response 2:
We sincerely thank the reviewer for this helpful comment regarding grammatical consistency. We agree that using the past tense is the appropriate convention for describing the work we performed and the results we obtained. To address this and further enhance the overall quality and readability of our manuscript, we have not only corrected the specific issue of verb tense but also had the manuscript professionally edited by MDPI's Author Services.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This manuscript presents a novel dynamic SOME/IP-DDS bridge architecture aimed at improving interoperability between AUTOSAR Adaptive and ROS2. Compared to traditional static bridges, the proposed solution dynamically creates and releases communication entities, and automatically manages QoS settings to ensure compatibility and efficient resource usage. The paper is well-structured and provides both simulation-based performance evaluation and real-world validation on an autonomous robot platform. The experimental results convincingly demonstrate the scalability, low-latency performance, and practical feasibility of the approach. Overall, the topic is relevant to the development of Software-Defined Vehicles and contributes to bridging the gap between academic research (ROS2) and industrial application (AUTOSAR Adaptive).
The experiments were conducted using a high-spec PC environment (Intel i7, 32 GB RAM). How would the proposed bridge perform on resource-constrained embedded automotive ECUs? Could the authors provide additional discussion or estimation regarding computational overhead in such environments?
The bridge automatically manages QoS settings between SOME/IP and DDS. However, in complex autonomous driving scenarios with heterogeneous QoS demands (e.g., safety-critical vs. infotainment data), how does the bridge resolve conflicting QoS requirements? Are there mechanisms to prioritize or negotiate QoS settings dynamically?
The current validation relies on the open-source vsomeip implementation. What challenges do the authors anticipate when integrating the proposed bridge with a commercial Adaptive AUTOSAR stack? Specifically, could vendor-specific extensions or restrictions in ARXML manifest files affect portability and reusability?
It is suggested to cite the following paper to enhance the paper's relevance and background in terms of methodologies and approaches: https://doi.org/10.1016/j.ress.2025.111092. Incorporating this reference would not only broaden the scope of the paper but also strengthen its foundation by connecting it to cutting-edge research in related fields.
Author Response
Comment 1:
The experiments were conducted using a high-spec PC environment (Intel i7, 32 GB RAM). How would the proposed bridge perform on resource-constrained embedded automotive ECUs? Could the authors provide additional discussion or estimation regarding computational overhead in such environments?
Response 1:
Thank you for this crucial question. We recognized that our initial validation on a high-spec PC was a significant limitation. To address this directly, we have performed a comprehensive new set of experiments on an embedded platform, the NVIDIA Jetson Xavier NX.
The revised Section 4.1 now includes a full performance analysis on this embedded platform, with results presented alongside the PC data in Figures 5, 6, and 7. This dual-platform evaluation provides a direct comparison and quantifies the performance overhead in an embedded environment.
Specifically, our results show that while the bridge remains highly efficient, the latency and resource consumption are, as expected, more pronounced on the Jetson platform (e.g., Figure 5(c) and 5(d)). Furthermore, our analysis of the emulated static mode in Figure 7 clearly demonstrates that the resource wastage of a static approach is far more critical on embedded hardware, reinforcing the necessity of our dynamic architecture. We believe this extensive new validation provides a robust answer to your question and significantly strengthens the paper’s practical relevance.
Modifications and additions can be found below.
Please see the attachment.
Comment 2:
The bridge automatically manages QoS settings between SOME/IP and DDS. However, in complex autonomous driving scenarios with heterogeneous QoS demands (e.g., safety-critical vs. infotainment data), how does the bridge resolve conflicting QoS requirements? Are there mechanisms to prioritize or negotiate QoS settings dynamically?
Response 2:
Thank you for this excellent, practical question. The scenario you described, involving heterogeneous data streams like safety-critical and infotainment data, is an illustration of the challenges our work aims to address and one that merits a detailed explanation.
To answer directly, the current version of our bridge does not perform runtime prioritization or dynamic negotiation of QoS policies. Such critical distinctions between data types are managed at the system architecture and design stage. The system architect is responsible for encoding these priorities into the JSON configuration file—for instance, by mandating a ‘Reliable’ profile for the safety-critical path and allowing a ‘Best Effort’ profile for the infotainment path.
Our bridge then acts as an enforcer of this architectural design. The primary reason for this is safety and predictability. In the automotive domain, the silent failure or quality degradation of a safety-critical data link is a significant risk. An automatic QoS downgrade, for example, could lead to the loss of vital sensor data or control commands, which is an unacceptable outcome.
We appreciate your highlighting the need to make this clearer. We have revised the manuscript to explicitly describe this design enforcement role and its importance in safety-related scenarios. The QoS Manager description in Section 3.1.3 has been substantially rewritten to provide an explanation of its strict policy enforcement, including the step-by-step fail-safe procedure for handling mismatches and the rationale for this design choice. Complementing this, the Conclusion in Section 5 has been expanded to discuss the limitations of this static approach and to propose the development of dynamic QoS negotiation as a key direction for future research.
Modifications and additions can be found below.
Please see the attachment.
Comment 3:
The current validation relies on the open-source vsomeip implementation. What challenges do the authors anticipate when integrating the proposed bridge with a commercial Adaptive AUTOSAR stack? Specifically, could vendor-specific extensions or restrictions in ARXML manifest files affect portability and reusability?
Response 3:
Thank you for highlighting this valuable concern. Our current validation was conducted with the open-source vsomeip implementation, and we recognize that integrating the bridge into a commercial AUTOSAR Adaptive stack may raise additional challenges. In particular, vendor-specific extensions in ARXML manifest files can introduce compatibility issues, as non-standard attributes or toolchain-specific structures may prevent direct mapping to the bridge’s configuration rules. Such variations could also reduce reusability across different vendors’ environments, leading to additional engineering effort for adaptation. We have revised Section 4.2 to explicitly acknowledge these potential limitations and to note that future work will involve collaboration with industry partners to evaluate portability under real ARXML configurations and ensure compliance with production-grade AUTOSAR Adaptive stacks.
Additions can be found below.
Please see the attachment.
Comment 4:
It is suggested to cite the following paper to enhance the paper's relevance and background in terms of methodologies and approaches: https://doi.org/10.1016/j.ress.2025.111092. Incorporating this reference would not only broaden the scope of the paper but also strengthen its foundation by connecting it to cutting-edge research in related fields.
Response 4:
Thank you for pointing this out. We agree that incorporating recent advances in related methodologies would strengthen the background of our work. Accordingly, we have cited the paper in the Introduction. This addition broadens the scope of the paper by connecting our research to cutting-edge studies in intelligent transportation systems, thereby reinforcing the relevance and foundation of our work.
Additions can be found below.
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors have carefully and thoroughly responded to the reviewer's comments.
The reviewer does not have further comments.
Author Response
Dear Reviewer,
Thank you once again for your valuable time and constructive evaluation of our manuscript. We are particularly grateful for your positive assessment, and we are encouraged that you found our previous revisions to be thorough and comprehensive.
While you noted that no further changes were required, we took your helpful suggestion to heart and have made a minor revision to the introduction section. We believe this change enhances the clarity of our contributions. We sincerely appreciate your guidance throughout this process and thank you for recommending our manuscript for publication.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Comments:
1. The authors responded to the reviewers' comments, although some minimal issues remain to be addressed, such as:
a. Include a numerical comparison with at least one of the referenced works (e.g., Hong et al. or Iwakami et al.) in terms of latency, memory, or scalability.
b. Elaborate on how scenarios with conflicting QoS policies would be handled at runtime, or more explicitly justify the decision not to do so.
c. Include a discussion of how these results would translate to a real automotive ECU (e.g., temperature considerations, certification, etc.).
d. Specify which certification or security aspects might affect the bridge implementation.
e. Include a brief qualitative analysis or references on how these techniques could be implemented in the future.
f. Review all figures to ensure readability and completeness.
g. Explicit tests of QoS policies (e.g., reliability, deadlines) are missing.
h. Include at least one experiment that compares performance under different QoS policies (e.g., Best Effort vs. Reliable).
Author Response
Comment a:
Include a numerical comparison with at least one of the referenced works (e.g., Hong et al. or Iwakami et al.) in terms of latency, memory, or scalability.
Response a:
We thank the reviewer for re-emphasizing the importance of a clear comparison. In our previous revision, we introduced Table 1 to provide a qualitative architectural analysis, as we concluded that a direct numerical comparison would be challenging to interpret fairly across different experimental settings.
The purpose of the existing comparison in Table 1 is to ground the analysis in the architectural differences, which can be assessed objectively, rather than in performance metrics that could be difficult to fairly interpret across different environments. We hope this table effectively communicates the fundamental design advantages of our dynamic approach.
Comment b:
Elaborate on how scenarios with conflicting QoS policies would be handled at runtime, or more explicitly justify the decision not to do so.
Response b:
We thank the reviewer for this important question, as it highlighted that our manuscript did not sufficiently explain this key design feature. To address this, we have now added a clarifying sentence to the 'QoS Manager' subsection.
As explained in the revised text, our architecture is intentionally designed to prevent runtime QoS conflicts by enforcing a strict, pre-configured rule set. This pre-emptive validation ensures that any request with incompatible QoS policies is rejected before a communication path is ever established, guaranteeing predictable and robust system behavior. We believe this clarification improves the description of our system's reliability, and we thank the reviewer for prompting this improvement.
Additions can be found below.
[3. Dynamic SOME/IP-DDS Bridge Architecture and Implementation: Line 285-295]
QoS Manager: Serves as the authority for QoS policy enforcement. It validates discovered requirements against the mandatory QoS profile defined in the JSON configuration file. If the profiles are fully compatible, the QoS Manager authorizes the creation of the corresponding DDS Publisher or Subscriber; otherwise, it reports a failure to the Bridge Manager, which aborts the connection setup and records the error. This pre-emptive validation ensures that runtime QoS policy conflicts are prevented by design, as incompatible communication paths are never established in the first place. This fail-safe mechanism prioritizes predictability and safety over runtime adaptation, which is essential in automotive systems. The strict enforcement of QoS policy also reflects the fact that AUTOSAR SOME/IP does not provide a standardized protocol for dynamic QoS negotiation.
Comment c, d, e:
c. Include a discussion of how these results would translate to a real automotive ECU (e.g., temperature considerations, certification, etc.).
d. Specify which certification or security aspects might affect the bridge implementation.
e. Include a brief qualitative analysis or references on how these techniques could be implemented in the future.
Response c, d, e:
We thank the reviewer for these insightful comments, and we completely agree that these are critical considerations for production deployment. We also identified these as important limitations of the current study's scope.
Therefore, as stated in our Conclusion section, we have designated these topics as key directions for our future research. For the reviewer's convenience, the relevant text from our manuscript is:
[5. Conclusions: Line 776-783]
The current validation was conducted using the open-source vsomeip stack, and the experiments on the target embedded ECU were focused on verifying the bridge's own efficiency (in terms of CPU, memory, and internal latency) as a foundational first step. Building on this, future work should pursue a more comprehensive, system-level evaluation by integrating the bridge with a commercial AUTOSAR Adaptive platform stack. Such a study would allow for analysis under various QoS policies and in an environment that includes production-grade variables like certification requirements, security mechanisms, and strict timing constraints, which were not fully considered here.
We hope this passage clarifies that we share the reviewer's perspective on the importance of these topics. By defining them as the clear direction for our future work, we aimed to set a transparent and realistic scope for the current manuscript.
Comment f:
Review all figures to ensure readability and completeness.
Response f:
Thank you for the feedback on the figures. We have since performed a careful re-evaluation of all figures and captions, with the reviewer's comments on readability and completeness in mind.
Comment g, h:
g. Explicit tests of QoS policies (e.g., reliability, deadlines) are missing.
h. Include at least one experiment that compares performance under different QoS policies (e.g., Best Effort vs. Reliable).
Response g, h:
We fully agree with the reviewer on the importance of dedicated QoS experiments and have carefully considered the suggestion to add at least one comparison.
However, a single experiment conducted under ideal network conditions might not be fully representative of a policy's real-world impact. The key performance trade-offs we aim to investigate typically emerge only under specific network stress conditions, such as packet loss or congestion. We are concerned that presenting a single data point without this crucial context could be inconclusive and might not fully capture the behavior the reviewer is interested in.
For this reason, our current study focused first on establishing the bridge architecture's fundamental stability.
Therefore, we believe a comprehensive study, as outlined in our future work, is the most appropriate way to investigate this topic with the necessary depth. As noted in the third paragraph of our Conclusions section, our proposed future work is a study that would allow for:
[5. Conclusions: Line 776-783]
The current validation was conducted using the open-source vsomeip stack, and the experiments on the target embedded ECU were focused on verifying the bridge's own efficiency (in terms of CPU, memory, and internal latency) as a foundational first step. Building on this, future work should pursue a more comprehensive, system-level evaluation by integrating the bridge with a commercial AUTOSAR Adaptive platform stack. Such a study would allow for analysis under various QoS policies and in an environment that includes production-grade variables like certification requirements, security mechanisms, and strict timing constraints, which were not fully considered here.
We hope this clarifies our rationale and demonstrates our commitment to addressing this topic thoroughly in our future research.
Author Response File: Author Response.pdf