Article

A Dynamic Bridge Architecture for Efficient Interoperability Between AUTOSAR Adaptive and ROS2

by Suhong Kim, Hyeongju Choi, Suhaeng Lee, Minseo Kim, Hyunseo Shin and Changjoo Moon *
Department of Smart Vehicle Engineering, Konkuk University, Seoul 05029, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(18), 3635; https://doi.org/10.3390/electronics14183635
Submission received: 9 August 2025 / Revised: 9 September 2025 / Accepted: 12 September 2025 / Published: 14 September 2025
(This article belongs to the Special Issue Advances in Autonomous Vehicular Networks)

Abstract

The automotive industry is undergoing a transition toward Software-Defined Vehicles (SDVs), necessitating the integration of AUTOSAR Adaptive, a standard for vehicle control, with ROS2, a platform for autonomous driving research. However, current static bridge approaches present notable limitations, chiefly unnecessary resource consumption and Quality of Service (QoS) compatibility issues. To tackle these challenges, this paper puts forward a dynamic bridge architecture consisting of three components: a Discovery Manager, a Bridge Manager, and a Message Router. The proposed dynamic SOME/IP-DDS bridge detects service discovery events from the SOME/IP and DDS domains in real time, allowing for the creation and destruction of communication entities as needed. Additionally, it automatically manages QoS settings to ensure that they remain compatible. The experimental results indicate that this architecture maintains a stable latency even with a growing number of connections, demonstrating high scalability while also reducing memory usage during idle periods compared to static methods. Moreover, real-world assessments using an autonomous driving robot confirm its real-time applicability by reliably relaying sensor data to Autoware with minimal end-to-end latency. This research contributes to expediting the integration of autonomous driving research and production vehicle platforms by offering a more efficient and robust interoperability solution.

1. Introduction

The automotive sector is swiftly evolving toward the Software-Defined Vehicle (SDV) era, a paradigm focused on software-centric development [1]. This transition is accelerating the move from a decentralized structure composed of numerous Electronic Control Units (ECUs) to a centralized model in which a limited number of high-performance computers oversee the entire vehicle, making the software platform’s role increasingly crucial [2]. Additionally, the shift toward SDVs is fostering an environment where vehicle functions and services can be delivered and personalized flexibly through Over-The-Air (OTA) software updates.
In contrast to the conventional static and limited AUTOSAR Classic, AUTOSAR Adaptive offers a dynamic environment based on a Service-Oriented Architecture (SOA), facilitating software updates and personalized service delivery via OTA technology [3,4]. AUTOSAR Adaptive is becoming a standard technology, especially for enabling seamless communication among various in-vehicle applications and for domains requiring real-time performance, such as advanced driver assistance systems and autonomous driving systems. Simultaneously, the Robot Operating System 2 (ROS2), a software platform for robotics, is extensively used in autonomous driving applications that necessitate high-performance computing [5]. ROS2 employs a data-centric design and leverages the Data Distribution Service (DDS) as its foundational communication middleware, which ensures real-time performance and reliability. Offering a diverse open-source ecosystem for autonomous driving, ROS2 is widely utilized for research and prototype development. Notably, open-source frameworks like Autoware exemplify integrated solutions for autonomous driving based on ROS2 [6].
Furthermore, there is a significant amount of research based on ROS2 actively being conducted within academia [7]. Hence, AUTOSAR Adaptive, which is geared toward vehicle mass production and safety standards, and ROS2, known for its strengths in rapid research, development, and prototyping, have emerged as critical platforms within their respective fields. Alongside middleware interoperability studies, other recent works have investigated physics-informed deep learning approaches to enhance robustness and safety in intelligent transportation systems [8]. Therefore, establishing interoperability that harnesses the strengths of both platforms is vital for the efficient application and commercialization of autonomous driving research in production vehicles. Such interoperability is important for efficiently transitioning research prototypes from the ROS2 ecosystem into the safety-critical, production-oriented AUTOSAR environment for effective in-vehicle validation.
Recent research has made several attempts to enhance interoperability between the two platforms. However, the integration methods suggested in the current literature for ROS2 and AUTOSAR Adaptive mainly depend on static ROS2 bridge nodes. This approach presents a notable limitation: it necessitates the ongoing maintenance of Publisher and Subscriber entities, even in the absence of active communication [9,10]. Consequently, the ROS2 bridge node continuously keeps the associated Publisher and Subscriber entities active, regardless of whether a service related to a topic in the AUTOSAR Adaptive SOME/IP domain is in use.
DDS, which serves as the communication middleware for ROS2, offers a dynamic discovery process that automatically identifies nodes on the network [11]. However, ROS2 simplifies the intricate functionalities of DDS through a high-level API, which prevents bridges developed as standard ROS2 nodes from effectively taking advantage of this dynamic discovery capability. Thus, existing research has resorted to a static method that necessitates the pre-creation of all required Publisher and Subscriber entities [9,10], an approach that is inherently inefficient in its resource utilization. Owing to its data-centric design, DDS must manage metadata for each communication entity engaged in the network. A static bridge pre-activates multiple Publishers and Subscribers, which remain in an active state even during periods of inactivity. As a result, the memory needed to maintain their status information increases alongside the number of connections, potentially leading to considerable memory waste in resource-constrained automotive embedded systems [12].
DDS offers Quality of Service (QoS) policy settings that can effectively regulate communication performance, and since ROS2 is built on DDS, it also exposes these QoS settings [12,13]. Specifically, in the development of autonomous driving systems based on ROS2, communication performance can be optimized by adjusting QoS settings to match the characteristics of the messages. However, QoS policies must be compatible between communicating entities: the Publisher and Subscriber must possess identical or mutually compatible QoS settings for communication to proceed smoothly. In the worst case, mismatched QoS settings can result in data loss or a complete communication breakdown.
This paper introduces a dynamic SOME/IP-DDS bridge architecture to tackle the issue of resource wastage induced by static entity creation and the challenges arising from QoS policy incompatibility. The presented bridge aims to improve system efficiency and communication reliability by dynamically establishing and terminating communication entities based on their utilization and by automatically regulating QoS policy settings. To achieve this, this paper outlines an architecture comprising three essential modules: a Discovery Manager, a Bridge Manager, and a Message Router. The proposed bridge detects discovery events from both domains in real time (Discovery Manager), oversees the lifecycle of communication paths following rules defined in a bridge configuration file (Bridge Manager), and applies compatible QoS policies to convert and relay data (Message Router).
This paper is structured as follows: Section 2 presents relevant works and foundational knowledge. Section 3 details the proposed dynamic bridge architecture, including its components, operational workflow, and implementation specifics. Section 4 assesses the performance of the implemented dynamic bridge and confirms its capability for seamless data exchange within a real autonomous driving context. Finally, Section 5 concludes the paper and discusses potential avenues for future research.

2. Background and Related Works

This section outlines the background and related research pertinent to this paper. Section 2.1 and Section 2.2 describe the features of ROS2 and the AUTOSAR Adaptive platform, respectively, while Section 2.3 and Section 2.4 delve into DDS and SOME/IP, which represent the communication middleware for both platforms. Lastly, Section 2.5 discusses some studies related to this topic.

2.1. ROS2 (Robot Operating System 2)

ROS2 is an open-source middleware platform tailored for the development of robotic software and has been utilized in numerous research and development initiatives focused on autonomous driving. Built on a data-centric design, ROS2 employs DDS as its default communication middleware through an abstraction layer known as the RMW (ROS Middleware) interface, thereby delivering high reliability and real-time communication performance. ROS2 exposes a subset of the QoS policy settings available in DDS [13].

2.2. AUTOSAR Adaptive Platform

The AUTOSAR Adaptive platform is a standardized framework developed for the creation of vehicle software. In contrast to the traditional AUTOSAR Classic platform, it offers a dynamic and adaptable environment grounded in a Service-Oriented Architecture (SOA) [3,4]. This facilitates real-time software modifications, such as Over-The-Air (OTA) updates, along with the ability to deliver customized services. Communication standards specified by the AUTOSAR Adaptive platform include SOME/IP and DDS, with SOME/IP being primarily favored for implementation [14].

2.3. DDS (Data Distribution Service)

DDS is a middleware standard focused on data-centric communication for real-time distributed systems [15]. It facilitates efficient data sharing among nodes by categorizing data into topics and primarily utilizes UDP-based communication to guarantee quick and effective data transfer. Additionally, DDS features an integrated discovery mechanism that enables nodes in the network to identify each other and exchange configuration details dynamically. A significant advantage of DDS is its support for QoS policies, allowing it to fulfill various real-time requirements by adjusting parameters like reliability, durability, and deadline [12].
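To make these policies concrete, the following minimal sketch, assuming the eProsima Fast DDS 2.x API used later in this paper, configures reliability, durability, deadline, and history on a DataWriter QoS object; the function name and the chosen values are illustrative only.

```cpp
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>

using namespace eprosima::fastdds::dds;

// Illustrative only: tune a writer's QoS for reliable, late-joiner-friendly,
// deadline-monitored delivery (names follow the Fast DDS 2.x API).
DataWriterQos make_sensor_writer_qos()
{
    DataWriterQos qos = DATAWRITER_QOS_DEFAULT;
    qos.reliability().kind = RELIABLE_RELIABILITY_QOS;             // retransmit lost samples
    qos.durability().kind  = TRANSIENT_LOCAL_DURABILITY_QOS;       // keep samples for late joiners
    qos.deadline().period  = eprosima::fastrtps::Duration_t(1, 0); // expect a sample at least every 1 s
    qos.history().kind     = KEEP_LAST_HISTORY_QOS;
    qos.history().depth    = 10;                                   // buffer the last 10 samples
    return qos;
}
```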

2.4. SOME/IP (Scalable Service-Oriented MiddlewarE over IP)

SOME/IP is a lightweight middleware protocol specifically crafted for service-oriented communication within vehicle environments [16]. This protocol promotes efficient interactions between in-vehicle systems that are based on SOA. SOME/IP accommodates both UDP and TCP protocols and incorporates a Service Discovery mechanism (SOME/IP-SD) for discovering services [17]. SOME/IP-SD allows providers and consumers within the network to recognize and manage each other’s presence and available services, thus enabling flexible deployment and management of services.

2.5. Related Works

Recent investigations have explored the integration of ROS2 with the AUTOSAR Adaptive platform in the context of developing autonomous vehicles. J. Henle et al. conducted a comprehensive comparative study analyzing the communication, execution management, and security features of both platforms to assess their compatibility with future automotive architectures [18]. Their findings indicated that while the AUTOSAR Adaptive platform adheres more closely to automotive industry standards and security requirements, ROS’s strengths lie in its flexibility and rapid development capabilities, owing to its open-source ecosystem. This analysis underscores the need for integrating both platforms to utilize their strengths effectively.
Preliminary studies that propose actual integration architectures have largely concentrated on static bridging methods. The ARISA project, put forth by D. Hong et al., validated the interoperability between Autoware (a ROS2-based autonomous driving framework) and an AUTOSAR Adaptive simulator, successfully illustrating that data exchange is feasible through a static bridge node designed around the SOME/IP protocol [9]. Similarly, R. Iwakami et al. suggested a collaboration framework that functions as a ROS2 node bridge, primarily enhancing the setup convenience of Franca Interface Definition Language configuration files for various message types [10]. Alexandru Ioana et al. introduced a multi-protocol gateway that connects SOME/IP, DDS, and eCAL to investigate its suitability for Vehicle-to-Everything (V2X) scenarios [19]. These investigations confirmed the practicality of bridging multiple communication technologies. However, all of these studies relied on predetermined configurations, leading to inefficiencies in runtime resource management and a lack of mechanisms to ensure QoS compatibility.
Meanwhile, other studies have highlighted the necessity of dynamic QoS management. M. Çakır et al. noted that while dynamic QoS management is vital for next-generation SOA-based vehicular networks, the existing SOME/IP lacks a negotiation mechanism [20]. They proposed a dynamic QoS negotiation protocol aiming to address this gap, ensuring that communication quality aligns with service-specific demands. Although their approach differs, this study can be regarded as a foundational work that supports the goal of this study: to automate the management of DDS QoS policies, ensuring that communication quality and compatibility meet the requirements of future automotive networks.
An alternative approach that has been proposed involves enhancing the AUTOSAR Adaptive platform framework itself rather than relying on a separate bridging mechanism. Y. Cho et al. introduced an architecture that integrates a ROS network binding directly into the communication management module ara::com of the AUTOSAR Adaptive platform [21]. This strategy enables an AUTOSAR Adaptive Application to generate a ROS2 node internally, allowing it to publish data to the ROS2 network without the need for an external relay process. While this method promotes the close integration of both platforms, it fundamentally compromises one of AUTOSAR’s core principles: platform portability. Altering a standard module results in the software being heavily dependent on a non-standard, custom AUTOSAR platform explicitly developed for this study, thus diminishing its reusability across other commercial AUTOSAR platforms.
As analyzed above, prior studies have performed important groundwork by demonstrating the feasibility of integrating the two platforms through various approaches. However, because most of these rely on static bridging, they fall short of addressing runtime issues such as dynamic resource management and the compatibility of QoS policies. To make these distinctions explicit, Table 1 compares some representative approaches regarding their discovery mechanisms, resource management, and DDS QoS policy handling. As the table indicates, the proposed dynamic bridge overcomes these limitations by providing cross-domain discovery synchronization, proactive resource management, and strict DDS QoS policy enforcement.

3. Dynamic SOME/IP-DDS Bridge Architecture and Implementation

This section outlines the architecture, components, and operational principles of the proposed dynamic SOME/IP-DDS bridge. Section 3.1 elaborates on the overall architectural framework and its components, Section 3.2 provides details on the workflow, and Section 3.3 discusses the implementation method.

3.1. Dynamic SOME/IP-DDS Bridge Architecture

The bridge introduced in this paper comprises three fundamental modules: a Discovery Manager, a Bridge Manager, and a Message Router. Each module has a distinct function and interacts seamlessly to facilitate data exchange between SOME/IP and DDS. The overall architecture is illustrated in Figure 1.

3.1.1. Discovery Manager

The Discovery Manager is tasked with the dynamic detection and identification of services and communication participants across both communication domains. To accomplish this, it includes three sub-modules: a SOME/IP Handler, a DDS Handler, and a Discovery Synchronizer.
  • SOME/IP Handler: This component detects events like “Offer Service” and “Service Request” through the SOME/IP-SD protocol to gather information about currently available SOME/IP services.
  • DDS Handler: This component monitors the Global Data Space of DDS to identify active DDS participants (communication entities).
  • Discovery Synchronizer: Each handler sends its detected results to the Discovery Synchronizer, which cross-references the identified SOME/IP services and DDS participant data based on mapping rules specified in a pre-configured bridge configuration file.
During operation, the Discovery Manager monitors the availability of services in both DDS and SOME/IP domains and manages the state of bridge links. It references the bridge configuration file, which specifies the mapping rules between topics and services, together with their message types. When middleware callbacks indicate the presence of a new DDS Publisher/Subscriber or a SOME/IP Server/Client, the Discovery Manager validates the mapping against the configuration and issues control commands such as REGISTER, CREATE, or DELETE to the Bridge Manager through the command queue. When services or participants are no longer detected, deletion commands are transmitted. These interactions ensure that the bridge adapts dynamically to service availability while adhering to the predefined configuration, thereby maintaining consistent synchronization between the two domains.
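The sketch below illustrates how such discovery events can be funneled into bridge commands. It is a minimal sketch assuming the Fast DDS 2.x DomainParticipantListener discovery callbacks and the vsomeip 3.x availability handler; the BridgeCommand type, CommandSink callback, and key format are hypothetical names introduced only for this example, and the exact callback signatures depend on the library version.

```cpp
#include <fastdds/dds/domain/DomainParticipantListener.hpp>
#include <fastrtps/rtps/reader/ReaderDiscoveryInfo.h>
#include <vsomeip/vsomeip.hpp>

#include <functional>
#include <memory>
#include <string>

// Hypothetical command vocabulary handed to the Bridge Manager via the command queue.
enum class BridgeCommand { REGISTER, CREATE, DELETE_PATH };
using CommandSink = std::function<void(BridgeCommand, const std::string& key)>;

// DDS Handler: reacts to ROS2/DDS endpoint discovery (Fast DDS 2.x callbacks).
class DdsHandler : public eprosima::fastdds::dds::DomainParticipantListener
{
public:
    explicit DdsHandler(CommandSink sink) : sink_(std::move(sink)) {}

    void on_subscriber_discovery(
            eprosima::fastdds::dds::DomainParticipant* /*participant*/,
            eprosima::fastrtps::rtps::ReaderDiscoveryInfo&& info) override
    {
        using Info = eprosima::fastrtps::rtps::ReaderDiscoveryInfo;
        const std::string topic(info.info.topicName().c_str());
        if (info.status == Info::DISCOVERED_READER)
            sink_(BridgeCommand::REGISTER, topic);      // a ROS2 Subscriber appeared
        else if (info.status == Info::REMOVED_READER)
            sink_(BridgeCommand::DELETE_PATH, topic);   // it disappeared again
    }

private:
    CommandSink sink_;
};

// SOME/IP Handler: reacts to "Offer Service" / "Stop Offer" announced via SOME/IP-SD (vsomeip 3.x).
void register_someip_handler(const std::shared_ptr<vsomeip::application>& app,
                             vsomeip::service_t service, vsomeip::instance_t instance,
                             CommandSink sink)
{
    app->register_availability_handler(service, instance,
        [sink, service](vsomeip::service_t, vsomeip::instance_t, bool available) {
            sink(available ? BridgeCommand::REGISTER : BridgeCommand::DELETE_PATH,
                 "someip/" + std::to_string(service));
        });
}
```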

3.1.2. Bridge Manager

The Bridge Manager serves as the central management module, overseeing the overall operation of the bridge. It interprets requests from the Discovery Manager and manages the sub-modules of the Message Router to establish or terminate communication paths as needed.
  • Handling Registration Requests: When a registration request is received from the Discovery Manager, the Bridge Manager first sets up an internal message queue for data exchange between the two platforms. It then instructs the Message Router to establish the required SOME/IP Endpoint (Server/Client) and DDS Endpoint (Publisher/Subscriber) for communication.
  • Handling Deletion Requests: Upon receiving a deletion request, the Bridge Manager terminates the Endpoint objects associated with the specific communication and releases all related resources, such as the allocated message queue, to enhance system efficiency.
The Bridge Manager additionally governs the lifecycle of all bridge components through dedicated command queues. During initialization, it issues commands such as INIT and STOP to ensure that all modules start from a consistent state. During its runtime, it orchestrates the activation and termination of endpoints based on discovery results and mapping rules, while delegating QoS validation and provisioning to the QoS Manager. By concentrating control responsibilities in this manner, the Bridge Manager provides a predictable management layer that ensures the efficient coordination of communication paths, allowing the Message Router to remain focused on data forwarding.
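A minimal sketch of this command-driven control flow is given below; the Command structure, queue design, and dispatch loop are assumptions made for illustration, not the authors' implementation.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Hypothetical command structure exchanged between Discovery Manager and Bridge Manager.
struct Command {
    enum class Kind { INIT, REGISTER, CREATE, DELETE_PATH, STOP } kind;
    std::string mapping_key;  // which rule of the bridge configuration file it refers to
};

// Minimal thread-safe command queue (illustrative; the paper does not prescribe this design).
class CommandQueue {
public:
    void push(Command cmd) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(cmd)); }
        cv_.notify_one();
    }
    Command pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Command cmd = std::move(q_.front());
        q_.pop();
        return cmd;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Command> q_;
};

// Bridge Manager main loop: translate commands into endpoint lifecycle actions.
void bridge_manager_loop(CommandQueue& queue /*, MessageRouter& router */) {
    for (;;) {
        Command cmd = queue.pop();
        switch (cmd.kind) {
            case Command::Kind::REGISTER:
            case Command::Kind::CREATE:
                // allocate the internal message queue, then ask the Message Router to
                // create the SOME/IP and DDS endpoints for cmd.mapping_key
                break;
            case Command::Kind::DELETE_PATH:
                // tear down endpoints and release the message queue for cmd.mapping_key
                break;
            case Command::Kind::STOP:
                return;
            default:
                break;
        }
    }
}
```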

3.1.3. Message Router

The Message Router is responsible for the actual data translation and bidirectional communication relay, directed by the Bridge Manager. It consists of communication endpoints, a Data Converter, and a QoS Manager as sub-modules.
  • SOME/IP Server/Client: Acts as the communication endpoint for the SOME/IP domain. It is initialized upon request from the Bridge Manager to transmit and receive messages, facilitating data exchange with the DDS side via the internal message queue.
  • DDS Publisher/Subscriber: Functions as the communication endpoint for the DDS domain. It is created based on the settings of the QoS Manager and exchanges data with the SOME/IP side through the internal message queue.
  • Data Converter: This component converts the byte stream data obtained from SOME/IP messages into the message format utilized by ROS2 (or DDS) and can also perform the reverse conversion.
  • QoS Manager: Serves as the authority for QoS policy enforcement. It validates discovered requirements against the mandatory QoS profile defined in the JSON configuration file. If the profiles are fully compatible, the QoS Manager authorizes the creation of the corresponding DDS Publisher or Subscriber; otherwise, it reports a failure to the Bridge Manager, which aborts the connection setup and records the error. This pre-emptive validation ensures that runtime QoS policy conflicts are prevented by design, as incompatible communication paths are never established in the first place. This fail-safe mechanism prioritizes predictability and safety over runtime adaptation, which is essential in automotive systems. The strict enforcement of QoS policy also reflects the fact that AUTOSAR SOME/IP does not provide a standardized protocol for dynamic QoS negotiation.
The Message Router establishes the bidirectional data path between DDS and SOME/IP through symmetric communication endpoints on both sides. Incoming messages are normalized into an internal payload and forwarded according to the mapping rules defined by the Bridge Manager and derived from the JSON configuration file. In the DDS-to-SOME/IP direction, subscribed DDS samples are enqueued and delivered by the SOME/IP Server to its subscribers. In the reverse direction, SOME/IP events or method responses are published to the mapped DDS topic. On the DDS side, QoS policies such as reliability, history, and durability are applied based on the profiles specified in the configuration file, while on the SOME/IP side, message delivery follows subscription and event-group mechanisms. Through these coordinated operations, the Message Router ensures reliable and consistent message transfer across the two domains.
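The following sketch illustrates the kind of pre-emptive request-vs-offered check the QoS Manager performs before authorizing endpoint creation. It is deliberately simplified to two representative policies (reliability and durability) and uses Fast DDS 2.x types; the paper's actual validation is performed against the mandatory profile in the JSON configuration file.

```cpp
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/dds/subscriber/qos/DataReaderQos.hpp>

using namespace eprosima::fastdds::dds;

// A RELIABLE writer satisfies both RELIABLE and BEST_EFFORT readers;
// a BEST_EFFORT writer satisfies only BEST_EFFORT readers.
bool reliability_compatible(ReliabilityQosPolicyKind offered, ReliabilityQosPolicyKind requested)
{
    return !(offered == BEST_EFFORT_RELIABILITY_QOS && requested == RELIABLE_RELIABILITY_QOS);
}

// Simplified durability ordering: TRANSIENT_LOCAL >= VOLATILE; the offered level
// must be at least as strong as the requested one.
bool durability_compatible(DurabilityQosPolicyKind offered, DurabilityQosPolicyKind requested)
{
    return !(offered == VOLATILE_DURABILITY_QOS && requested == TRANSIENT_LOCAL_DURABILITY_QOS);
}

// Decision made before any endpoint is created: if the profiles are not fully
// compatible, connection setup is aborted and the error is reported.
bool authorize_connection(const DataWriterQos& offered, const DataReaderQos& requested)
{
    return reliability_compatible(offered.reliability().kind, requested.reliability().kind)
        && durability_compatible(offered.durability().kind, requested.durability().kind);
}
```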

3.2. Dynamic SOME/IP-DDS Bridge Workflow

The data exchange between an AUTOSAR Adaptive Application (referred to as the Adaptive Application) and a ROS2 node encompasses two scenarios: data flowing from the Adaptive Application to the ROS2 node and vice versa. This section outlines the operational workflow of the bridge for these two directions of data transmission, including the workflow for situations where data transmission encounters interruptions. Although SOME/IP accommodates various communication patterns, such as Event, Method, and Field, this section will focus on the bridge’s workflow based on the Method-based Request/Response model.

3.2.1. Adaptive Application to ROS2 Node

Figure 2 illustrates the operational workflow of the dynamic SOME/IP-DDS bridge when transferring data from an Adaptive Application to a ROS2 node. The bridge’s connection procedure can be initiated from two different points. The first point occurs when the Server of the Adaptive Application generates an “Offer Service” event to announce its availability, while the second point is triggered when a Subscriber for a specific topic is activated within a ROS2 node. The bridge continuously monitors events from both platforms and sequentially executes the following procedures to establish a connection whenever either event is recognized.
  • Event Detection and Waiting: The bridge waits to receive an “Offer Service” event from SOME/IP-SD and a “Subscriber registration” event from DDS Discovery. The matching process commences asynchronously, even if only one of the two events is detected.
  • Service Matching: The bridge checks for the counterpart (Service or Subscriber) needed for interconnection based on the detected event. If the necessary counterpart is not yet established—such as when only an “Offer Service” from the Adaptive Application is detected—it waits for the registration of the corresponding ROS2 node’s Subscriber. Conversely, if only the Subscriber from the ROS2 node is recognized, the bridge promptly multicasts a “Find Service” message via SOME/IP-SD to locate the service and awaits a response. Once both the Service and Subscriber are confirmed and successfully matched, the process moves on to setting up the data path.
  • Message Queue Creation: Utilizing the matched information, an internal message queue is established to facilitate data transmission between the two endpoints. This queue acts as a buffer for asynchronous data exchange between the two communication protocols.
  • Communication Endpoint (Client/Publisher) Creation: Endpoints suitable for each communication protocol are generated for data exchange. A Client for SOME/IP communication and a Publisher for DDS communication are created. Importantly, the DDS Publisher is configured to align with the QoS policy required by the ROS2 node’s Subscriber to ensure high-quality data transmission.
  • Service Request: The created SOME/IP Client sends a “Service Request” to the Server of the Adaptive Application. This request initiates a session through which actual data can be transmitted.
  • Message Subscribe, Convert, and Publish: In response to the “Service Request”, the bridge receives the SOME/IP message sent by the Adaptive Application’s Server. It subsequently converts this message into the ROS2 message type required by the ROS2 node’s Subscriber. Finally, the bridge publishes the converted ROS2 message to the appropriate topic using the previously established DDS Publisher, adhering to the defined QoS policy settings and ensuring reliable delivery to the ROS2 node’s Subscriber.
For unidirectional data streams, like Event communication, the “Service Request” and “Send Response” steps in the workflow depicted in Figure 2 are omitted. Instead, a simpler mechanism is employed, where data paths are established via a “Subscribe Eventgroup” upon the initial connection, leading to continuous data reception. For Field communication, notifications of value changes follow a similar process to that for the Event model. In contrast, Getter/Setter operations for reading or writing values adhere to the Method-based Request/Response model.
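As a concrete illustration of the SOME/IP side of this workflow, the sketch below requests the service and forwards every received message into the internal queue, assuming the vsomeip 3.x API; the RawPayload and EnqueueFn types are hypothetical stand-ins for the bridge's internal message queue, whose other end is consumed by the Data Converter and DDS Publisher.

```cpp
#include <vsomeip/vsomeip.hpp>

#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// Hypothetical internal queue entry: the raw SOME/IP payload bytes that the
// Data Converter later turns into the mapped ROS2 message type.
using RawPayload = std::vector<std::uint8_t>;
using EnqueueFn  = std::function<void(RawPayload)>;

// Bridge-side SOME/IP Client for the Adaptive-Application-to-ROS2 direction (vsomeip 3.x).
void wire_adaptive_to_ros2(const std::shared_ptr<vsomeip::application>& app,
                           vsomeip::service_t service, vsomeip::instance_t instance,
                           vsomeip::method_t method, EnqueueFn enqueue)
{
    // "Service Request": open a session with the Adaptive Application's Server.
    app->request_service(service, instance);

    // "Message Subscribe": every received SOME/IP message is copied into the internal
    // message queue; the Data Converter and DDS Publisher consume it on the other end.
    app->register_message_handler(service, instance, method,
        [enqueue](const std::shared_ptr<vsomeip::message>& msg) {
            auto payload = msg->get_payload();
            enqueue(RawPayload(payload->get_data(),
                               payload->get_data() + payload->get_length()));
        });
}
```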

3.2.2. ROS2 Node to Adaptive Application

Figure 3 depicts the operational workflow of the dynamic SOME/IP-DDS bridge when transmitting data from a ROS2 node to an Adaptive Application. The connection procedure in this direction can be initiated under two circumstances. The first occurs when a ROS2 node activates a Publisher that is responsible for publishing messages, while the second case arises when a Client within the Adaptive Application initiates a “Find Service” event to locate a particular service. The bridge continuously monitors events from both sides and initiates the connection setup procedure as soon as it detects either event.
  • Event Detection and Waiting: The bridge persistently monitors for a “Publisher registration” event broadcast via DDS Discovery and a “Find Service” event sent through SOME/IP-SD. The matching process commences asynchronously, even if only one of the two events is detected.
  • Service Matching: Upon detecting an event, the bridge verifies the existence of its counterpart (either Publisher or Client) for interconnection. If the needed counterpart is not present—for example, if only a “Find Service” event from the Adaptive Application is detected—the bridge will wait for the corresponding ROS2 node’s Publisher to register. Conversely, if only the ROS2 node’s Publisher is detected, the bridge immediately multicasts an “Offer Service” message via SOME/IP-SD and awaits a “Service Request.” Once both the Client and Publisher are successfully matched, the process advances to the subsequent step of establishing the data path.
  • Message Queue Creation: An internal message queue is established based on the matched information to facilitate data transmission between the two endpoints. This queue acts as a buffer for asynchronous data transfers between the two communication protocols.
  • Communication Endpoint Creation: Appropriate endpoints for each communication protocol are created to enable data exchange. A Server is established for SOME/IP communication, while a Subscriber is set up for DDS communication. Importantly, the DDS Subscriber is configured to align with the QoS policy of the ROS2 node’s Publisher to ensure high-quality data subscription.
  • Offer Service and Service Request: The created SOME/IP Server transmits an “Offer Service” event to the Client within the Adaptive Application, signaling the availability of the service. Upon receiving this, the Client of the Adaptive Application sends a “Service Request” to the bridge’s Server to initiate data reception.
  • Message Subscribe, Convert, and Respond: The bridge’s DDS Subscriber subscribes to messages published by the ROS2 node’s Publisher and transforms the received ROS2 messages into a SOME/IP data format that the Client of the Adaptive Application can comprehend. The converted data is then delivered to the Client via the SOME/IP Server.
For one-way data streams, such as Event communication, the “Service Request” and “Send Response” stages depicted in Figure 3 are omitted. Instead, the process operates in a more streamlined manner, where the data path is established through a “Subscribe Eventgroup” upon initial connection, allowing for continuous data reception thereafter. For Field communication, notifications of value changes follow the same workflow as the Event model, while Getter/Setter operations for value reading or writing correspond to the Method-based Request/Response model.
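The sketch below shows the DDS side of this direction: a Fast DDS DataReaderListener takes incoming ROS2 samples, converts them into a raw SOME/IP payload, and notifies SOME/IP subscribers through the bridge's Server. It assumes the Fast DDS 2.x and vsomeip 3.x APIs together with a fastddsgen-generated type for std_msgs/String; the generated header name and its member accessors are assumptions about the fastddsgen output.

```cpp
#include <fastdds/dds/subscriber/DataReader.hpp>
#include <fastdds/dds/subscriber/DataReaderListener.hpp>
#include <fastdds/dds/subscriber/SampleInfo.hpp>
#include <vsomeip/vsomeip.hpp>

#include <memory>
#include <string>

#include "String.h"  // fastddsgen-generated data type for std_msgs/String (assumed name)

class Ros2ToAdaptiveRelay : public eprosima::fastdds::dds::DataReaderListener
{
public:
    Ros2ToAdaptiveRelay(std::shared_ptr<vsomeip::application> app,
                        vsomeip::service_t service, vsomeip::instance_t instance,
                        vsomeip::event_t event, vsomeip::eventgroup_t group)
        : app_(std::move(app)), service_(service), instance_(instance), event_(event)
    {
        // Bridge-side SOME/IP Server: offer the mapped service and its event group.
        app_->offer_service(service_, instance_);
        app_->offer_event(service_, instance_, event_, {group});
    }

    void on_data_available(eprosima::fastdds::dds::DataReader* reader) override
    {
        std_msgs::msg::String sample;
        eprosima::fastdds::dds::SampleInfo info;
        while (reader->take_next_sample(&sample, &info)
               == eprosima::fastrtps::types::ReturnCode_t::RETCODE_OK) {
            if (!info.valid_data) continue;
            // Convert the ROS2 message into a raw SOME/IP payload and notify subscribers.
            auto payload = vsomeip::runtime::get()->create_payload();
            const std::string& text = sample.data();
            payload->set_data(reinterpret_cast<const vsomeip::byte_t*>(text.data()), text.size());
            app_->notify(service_, instance_, event_, payload);
        }
    }

private:
    std::shared_ptr<vsomeip::application> app_;
    vsomeip::service_t service_;
    vsomeip::instance_t instance_;
    vsomeip::event_t event_;
};
```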

3.2.3. Remove Connection

Figure 4 depicts the process of resource release that occurs when an established bridge connection is terminated. The disconnection sequence is initiated when the bridge identifies a loss of connection at either endpoint of communication (SOME/IP or DDS). This situation includes instances such as a SOME/IP Server stopping its service offer (Stop Offer Service), a Client ending its session, or a DDS Publisher or Subscriber disconnecting. After confirming a disconnection, the bridge undertakes the following cleanup procedures in reverse order to free up the resources allocated during the setup of the data path (a minimal tear-down sketch follows the list below).
  • Remove Communication Endpoints: Upon confirmation of disconnection, the first step is to eliminate the communication endpoints utilized for that data path. This involves removing the SOME/IP Server or Client and the DDS Publisher or Subscriber, which were created based on the direction of data flow.
  • Remove Message Queue: Following the removal of both endpoints, the internal message queue that acted as a buffer between them is deleted from the memory, and its resources are released.
  • Return to Waiting State: Once the resource release process is finalized, the bridge transitions back to a waiting state, ready to receive new connection events.
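A minimal tear-down sketch under the same API assumptions (Fast DDS 2.x, vsomeip 3.x) is shown below; the message-queue handling is only indicated in comments, since it depends on the bridge's internal queue implementation.

```cpp
#include <fastdds/dds/publisher/DataWriter.hpp>
#include <fastdds/dds/publisher/Publisher.hpp>
#include <vsomeip/vsomeip.hpp>

#include <memory>

// Tear-down in reverse order of creation (Figure 4): endpoints first, then the queue.
void remove_connection(eprosima::fastdds::dds::Publisher* publisher,
                       eprosima::fastdds::dds::DataWriter* writer,
                       const std::shared_ptr<vsomeip::application>& app,
                       vsomeip::service_t service, vsomeip::instance_t instance
                       /*, MessageQueue& queue */)
{
    // 1. Remove communication endpoints.
    publisher->delete_datawriter(writer);        // DDS side
    app->stop_offer_service(service, instance);  // SOME/IP Server side (a Client would call release_service)

    // 2. Remove the internal message queue and release its memory.
    // queue.destroy();  // hypothetical; depends on the bridge's queue implementation

    // 3. The bridge then returns to its waiting state for new discovery events.
}
```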

3.3. Dynamic SOME/IP-DDS Bridge Implementation

The standard API level of ROS2 does not offer the functionality to access or manage the DDS discovery process directly. Therefore, to fully implement the dynamic detection and creation/deletion features of the proposed bridge, it was built as a standalone application rather than as a ROS2 node. This bridge operates as an independent process not linked to any particular platform. It employs the vsomeip library, a SOME/IP implementation provided by COVESA, as its communication stack for SOME/IP, and eProsima Fast DDS, the default RMW implementation in ROS2, for its DDS communication stack. The use of Fast DDS ensures compatibility with the ROS2 communication environment, eliminating the need for additional data conversion steps, since it utilizes the same DDS transport layer.
Moreover, to ensure that the bridge functions within the DDS domain and can communicate directly with ROS2 nodes, it must support ROS2 message types; to address this requirement, the fastddsgen utility, a code generation tool included with eProsima’s Fast DDS, is utilized. Initially, an IDL (Interface Definition Language) file is created to align with the format of the ROS2 message (.msg) intended for communication, and subsequently, fastddsgen is employed to automatically generate C++ data type header files and related source code from this IDL file for use in the DDS context. The resulting data types generated through this process are entirely compatible with the ROS2 message type, enabling seamless communication between the bridge and ROS2 nodes without necessitating a data type conversion process. Although fastddsgen is theoretically capable of converting all ROS2 message types to IDL, this study focuses on implementing and validating only the standard ROS2 messages, specifically std_msgs and sensor_msgs, and the types they internally use, such as geometry_msgs and builtin_interfaces.
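As an illustration of how such a generated type can be made visible to ROS2 nodes, the sketch below registers it with a Fast DDS DomainParticipant and creates a topic following ROS2's DDS naming conventions (topic names prefixed with "rt/", type names of the form <pkg>::msg::dds_::<Msg>_). The generated header and class names are assumptions about the fastddsgen output for a hand-written NavSatFix IDL.

```cpp
#include <fastdds/dds/domain/DomainParticipant.hpp>
#include <fastdds/dds/topic/Topic.hpp>
#include <fastdds/dds/topic/TypeSupport.hpp>

#include "NavSatFixPubSubTypes.h"  // generated by fastddsgen from a hand-written IDL (assumed file name)

using namespace eprosima::fastdds::dds;

// Register the generated type under the name ROS2 itself would use, and create
// the DDS topic that corresponds to the ROS2 topic "/gps/fix".
Topic* create_ros2_compatible_topic(DomainParticipant* participant)
{
    TypeSupport type(new sensor_msgs::msg::NavSatFixPubSubType());  // generated class (assumed name)
    type.register_type(participant, "sensor_msgs::msg::dds_::NavSatFix_");

    return participant->create_topic(
        "rt/gps/fix",                          // ROS2 topic "/gps/fix" with the "rt/" prefix
        "sensor_msgs::msg::dds_::NavSatFix_",  // ROS2-compatible DDS type name
        TOPIC_QOS_DEFAULT);
}
```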
In order for the bridge to effectively connect the two communication environments, it is crucial to have configuration information that accurately outlines the mapping of services and communication endpoints for both platforms. While the AUTOSAR Adaptive platform utilizes ARXML manifest files as the standard for system design, the AUTOSAR standard prohibits combining SOME/IP and DDS protocol settings within a single service, making it infeasible to establish a direct mapping relationship through ARXML [22]. Additionally, the ROS2 platform lacks a standardized approach for establishing mapping relationships with external middleware.
To mitigate these challenges, this paper proposes implementing the bridge with a configuration file in JSON format that delineates its operational rules. During its runtime, the bridge reads this JSON file to gather all necessary mapping information required for effective communication relay. Table 2 presents the roles of each item in the bridge configuration file. It should be noted, however, that the JSON format was adopted primarily for flexibility and rapid prototyping at the research stage, and it is non-standard compared to AUTOSAR ARXML manifests or ROS2 parameter files. Nonetheless, the elements specified in the JSON file conceptually correspond to those defined in ARXML and ROS2 conventions. For example, parameters such as someip.serviceID and someip.instanceID in JSON correspond to SOME/IP entries defined in ARXML, while QoS-related fields conceptually map to standard DDS/ROS2 QoS policies.
The structure of every rule in the bridge configuration file is dictated by the “Type” field, which corresponds to SOME/IP’s communication patterns. The bridge determines the method of communication relay based on this “Type” value.
  • When the “Type” is Event, it specifies a rule for mapping SOME/IP’s one-way Publish/Subscribe communication (Event) to a DDS topic.
  • When the “Type” is Method, it establishes a rule for mapping the two-way Request/Response communication (Method) to a Remote Procedure Call (RPC) service, utilizing a pair of DDS topics.
  • When the “Type” is Field, it sets a rule for relaying state-based data (Field). It also incorporates composite mapping for notifications regarding value changes (Event) and controlling values (Getter/Setter, which aligns with the Method pattern).
Concerning QoS policy settings, DDS offers 22 QoS policies, of which ROS2 exposes 6. The bridge described in this paper applies ROS2’s QoS policy settings based on the “dds.qos” value specified in the bridge configuration file, and the supported QoS policy types are listed in Table 3.
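For illustration, a single Event-type rule might look as follows. Only the someip.serviceID, someip.instanceID, Type, and dds.qos items are taken from the paper's description of the configuration file; the remaining field names and the overall nesting are hypothetical, and the authoritative schema is the one summarized in Table 2.

```json
{
  "bridge_rules": [
    {
      "Type": "Event",
      "someip": { "serviceID": "0x1234", "instanceID": "0x0001", "eventID": "0x8001", "eventgroupID": "0x0001" },
      "dds": {
        "topic": "rt/gps/fix",
        "type": "sensor_msgs::msg::dds_::NavSatFix_",
        "qos": { "reliability": "BEST_EFFORT", "history": "KEEP_LAST", "depth": 5, "durability": "VOLATILE" }
      }
    }
  ]
}
```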

4. Validation

To validate the bridge proposed in this paper, two experiments were conducted. The experiment detailed in Section 4.1 assesses the latency incurred by the bridge during inter-platform communication and examines the extent to which resource usage is reduced through the bridge’s resource management process. Section 4.2 validates the practical effectiveness of the bridge by using Autoware, an open-source autonomous driving application based on ROS2, in a real-world driving scenario.

4.1. Performance Evaluation of the Dynamic SOME/IP-DDS Bridge

A performance evaluation was undertaken to quantitatively assess the latency introduced by the proposed bridge during its data conversion and relay operations. The initial evaluation centers on examining the influence of data size on the bridge’s end-to-end performance. To accomplish this, experiments were performed by increasing the data size by powers of two, ranging from 1 KB to 256 KB. For both the data communication method and type, DDS was configured to utilize the topic-based std_msgs/String message type from ROS2, while SOME/IP was set up for event-based communication to send string data. The hardware and software setup utilized for the experiment is outlined in Table 4.
The performance was measured for bidirectional communication involving data exchange between the DDS and SOME/IP domains through the proposed bridge. To identify the primary sources of latency within the bridge, the measurement process was divided into several stages: the SOME/IP transmission/reception stage, the internal message queue transfer stage, the data conversion stage, and the DDS transmission/reception stage. The latency associated with each processing step was recorded. The outcomes regarding latency variations with increasing data size are presented in Figure 5.
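One way such per-stage latencies can be collected is by timestamping each message at the boundary of every stage, as in the sketch below; this is an illustrative measurement helper, not the instrumentation used in the experiments.

```cpp
#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

// Hypothetical per-message timestamps taken at each bridge stage (SOME/IP-to-DDS direction).
struct StageTimestamps {
    Clock::time_point someip_received;  // SOME/IP reception complete
    Clock::time_point dequeued;         // taken out of the internal message queue
    Clock::time_point converted;        // data conversion finished
    Clock::time_point dds_published;    // handed to the DDS DataWriter
};

struct StageLatenciesUs {
    std::int64_t queue_us;
    std::int64_t convert_us;
    std::int64_t publish_us;
    std::int64_t end_to_end_us;
};

// Split one end-to-end measurement into the per-stage contributions reported in Table 5.
StageLatenciesUs split_latency(const StageTimestamps& t)
{
    auto us = [](Clock::duration d) -> std::int64_t {
        return std::chrono::duration_cast<std::chrono::microseconds>(d).count();
    };
    return {
        us(t.dequeued - t.someip_received),
        us(t.converted - t.dequeued),
        us(t.dds_published - t.converted),
        us(t.dds_published - t.someip_received)
    };
}
```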
Figure 5a,c illustrate the end-to-end latency for each data size on both the PC and Jetson Xavier NX platforms. On the PC, for smaller data sizes of 32 KB or less, a remarkably low latency of under 50 µs was observed, indicating that the processing overheads of the proposed bridge are minimal. However, upon increasing the data size to 64 KB or more, a notable rise in latency was recorded. A similar exponential increase in latency was observed on the Jetson platform, though the absolute latency values were significantly higher due to its limited computing resources. For example, at 256 KB, the median latency from DDS to SOME/IP was approximately 440 µs on the PC, while it reached around 4700 µs on the Jetson. Notably, on both platforms, the conversion from SOME/IP to DDS exhibited better performance than the reverse direction, which is likely due to the discrepancies in data representation and serialization methods between the two communication middlewares.
Figure 5b,d illustrate the results obtained from assessing the internal processing stages—DDS Publish/Subscribe, Queue, Data Convert, and SOME/IP Publish/Subscribe—to identify the latency bottleneck in the end-to-end process. From these results, Table 5 presents the proportion of latency contributed by each stage to the total end-to-end latency of the bridge. It is evident that as the size of the data increases, the proportion of latency related to internal processing stages such as Queue and Data Convert diminishes, while the proportion associated with the Publish stage, which handles actual communication, increases significantly. This observation indicates that as the data size expands, the latency bottleneck transitions from the internal processing logic of the bridge to the inherent network I/O operations of the underlying communication middleware. This trend was observed on both the PC and the Jetson platforms, but it was far more pronounced on the latter, where the network I/O constituted much of the total latency for large data, highlighting the greater impact of I/O limitations on embedded hardware. In other words, the latency incurred in the DDS and SOME/IP Publish stages, which are critical for actual data transmission, emerges as the primary factor influencing overall performance.
Furthermore, when focusing solely on the Data Convert stage, an internal operation of the bridge, it was noted that transforming data from DDS to SOME/IP required more time than the reverse conversion. This time discrepancy arises from the complexity involved in the Common Data Representation (CDR) deserialization process used by DDS. Specifically, DDS must parse serialized data that contains type information to accommodate various data types; in contrast, SOME/IP data is represented as a straightforward string type, which can be processed with a mere memory copy. As this process is intrinsic to DDS, it represents a fundamental limitation that the bridge itself cannot eliminate. Future work may therefore consider middleware-level enhancements or hardware-assisted techniques to reduce these overheads for large payloads such as LiDAR data.
The previous measurement validated the influence of data size on latency within a single-connection context, while this measurement is aimed at evaluating the bridge’s performance in a multi-connection environment, specifically its connection scalability. To achieve this, the data size was kept constant while the number of concurrent connections to the bridge was increased, allowing for the observation of changes in latency. The data communication method and type were configured in the same manner as the earlier experiment, utilizing the topic-based std_msgs/String message type for DDS and an Event-based string message type for SOME/IP. The experimental setup remains consistent with that described in Table 4.
Figure 6 presents the results from measuring latency while increasing the number of concurrent connections on both the PC and Jetson platforms. Concurrent connections were increased exponentially by powers of two, ranging from 1 to 16, while measurements were taken for data sizes of 1 KB, 64 KB, and 128 KB. The findings confirmed that for each data size, the median and interquartile range of latency remained largely constant, regardless of the increase in the number of connections from 1 to 16. Moreover, this stability in latency with rising connection counts was consistently observed in both the transition from DDS to SOME/IP and the reverse (SOME/IP to DDS). However, an exception was observed on the resource-constrained Jetson Xavier NX under the most demanding load of 16 connections with 128 KB of data, where a significant increase in latency and variance indicated that the system was approaching its hardware resource limits.
Overall, these results indicate that the proposed bridge demonstrates symmetric and stable performance for bidirectional data conversion up to the physical limits of the hardware. Such stability can be attributed to the bridge’s architecture, which assigns a separate execution thread and message queue for each connection. As a result, the data processing load from one connection does not adversely affect the resources allocated to others, thus minimizing the impact on overall system latency, even as the number of connections grows. This mechanism ensures that in complex autonomous vehicle environments, where real-time exchanges of multiple sensor data and control signals occur, stable operation is maintained without compromising the communication performance of the existing system, even with the integration of new functionalities and an increase in the number of communication connections.
Next, to quantitatively evaluate the resource inefficiency issue of static bridges highlighted in the introduction, we established a performance baseline. For this, we configured our proposed bridge to operate in a “Static Mode”, emulating the behavior of a conventional static bridge by pre-creating all communication entities. We measured the resource usage in an idle state as the number of connections was incrementally increased. To demonstrate hardware relevance, this experiment was conducted on two distinct platforms, namely a high-spec PC and a resource-constrained embedded platform (NVIDIA Jetson Xavier NX), with other conditions aligning with Table 4.
Figure 7 presents a comparative analysis of the resource usage of the emulated static bridge on both the PC and the embedded platform. As depicted in Figure 7a, the average CPU usage on both systems exhibited a linear increase with the number of connections. Notably, the resource-constrained embedded platform showed a significantly steeper increase, reaching approximately 19.3% at 100 connections, compared to around 6% on the PC. This highlights the greater performance impact of the same workload on the embedded hardware. Similarly, Figure 7b shows that memory usage on both platforms also scaled linearly as connections were added, with the Jetson platform consistently showing a higher baseline memory footprint.
These results reveal the inherent scalability characteristics of the static bridge architecture. The key finding is that resource consumption scales in a predictable, linear fashion with the number of pre-configured connections, regardless of whether they are actively used. This linear growth demonstrates that resources are continuously consumed to maintain idle connections, leading to significant inefficiency. This issue is especially critical in the resource-constrained embedded environment, where the higher CPU load and wasted memory could jeopardize system stability in a real automotive system that requires hundreds of communication paths.
In contrast, the scalability of our proposed dynamic bridge is fundamentally more efficient. Its resource consumption is decoupled from the number of potential connections, remaining near-zero during idle periods as resources are allocated only when communication is active (referred to as zero-connection data). This experiment therefore quantitatively validates the problem statement of this paper, demonstrating that our dynamic architecture effectively solves the inefficient scaling and resource wastage issues inherent to the static approach, making it highly suitable for resource-constrained embedded environments.
To investigate the stability and operational limits of the proposed bridge under a heavy load, a performance boundary analysis was conducted on the embedded platform, the NVIDIA Jetson Xavier NX. In this experiment, the number of connections was varied from 10 to 100, while the message payload size was simultaneously varied across a range of 1 KB to 128 KB. During the test, messages were continuously transmitted at a frequency of 50 Hz for each connection. Each heatmap presented in Figure 8 was generated from a dataset of 10,000 latency samples to ensure statistical significance.
The heatmaps in Figure 8 visually demonstrate the interdependent effect of payload size and connection count on the bridge’s performance. The analysis reveals a clear trade-off relationship: the maximum number of concurrent connections that the bridge can handle reliably is inversely proportional to the message payload size.
The mean latency, shown in Figure 8a, and the standard deviation, in Figure 8b, illustrate this trade-off. For smaller payloads of 16 KB or less, the bridge exhibits high stability by maintaining low and predictable latency even at the maximum of 80 concurrent connections. However, as the payload size increases to 32 KB, a performance degradation threshold emerges at around 50 to 60 connections, marked by a sharp increase in latency and its standard deviation. This threshold for instability appears even earlier for larger payloads; with 64 KB messages, significant performance degradation and high variance are observed with as few as 30 to 40 connections.
The sharp increase in the standard deviation of latency is particularly significant, as it indicates that the system has become overloaded and its behavior is no longer predictable, thus becoming unreliable. In conclusion, this analysis provides a practical performance map that defines the operational limits of the bridge. It offers a quantitative guide, showing that to ensure stable and reliable operation, the number of concurrent connections must be managed in accordance with the expected message payload sizes.

4.2. Autonomous Driving Test in the Real World Using the Dynamic SOME/IP-DDS Bridge

To assess the proposed bridge in a realistic communication scenario, a real-world test environment was established, and experiments were carried out. While utilizing a certified commercial AUTOSAR Adaptive platform would have been ideal, practical constraints at the academic research stage, such as high licensing fees, made this infeasible.
Therefore, this study prioritizes protocol-level validation. To achieve this, a test application was developed using the vsomeip library, a widely recognized open-source SOME/IP implementation utilized in actual AUTOSAR Adaptive Applications. This approach guarantees that the proposed bridge interacts with the same communication stack as a commercial platform. Thus, even in the absence of an actual Adaptive Application, it is feasible to confirm that the core functions of the bridge perform correctly under comparable conditions.
Nevertheless, it should be noted that relying on a vsomeip-based mock AUTOSAR application does not capture all aspects of production-grade AUTOSAR Adaptive environments. Certification requirements, security mechanisms, and strict timing constraints imposed in real-world automotive systems were not fully addressed in this study. Moreover, when integrating with a commercial AUTOSAR Adaptive stack, vendor-specific extensions or restrictions in ARXML manifest files may affect portability and reusability. Addressing these challenges will require collaboration with industry partners to validate compatibility under real ARXML configurations and ensure seamless integration with production-grade AUTOSAR Adaptive platforms.

4.2.1. Test Environment and Scenario

To validate the practical efficacy of the proposed bridge, a real autonomous driving robot platform was designed based on the architecture illustrated in Figure 9. This environment is segmented into data acquisition and processing units. The data acquisition unit, which consists of a Jetson Xavier NX board, gathers data from a Velodyne LiDAR and a GPS sensor and sends it to the SOME/IP network through a vsomeip-based test application. In the data processing unit (a laptop), the proposed “Dynamic SOME/IP-DDS Bridge” receives this data, converts it into ROS2 messages, and subsequently forwards it to the autonomous driving software, Autoware. The specific hardware and software specifications for the experimental configuration are listed in Table 6.
To demonstrate the effectiveness of the bridge, a driving scenario was examined, as depicted in Figure 10. The experiment occurred in a real-road environment surrounding a campus lake. The driving test was conducted on a predefined route of approximately 200 m, which included straight sections and a 90-degree curve to ensure that the system was tested under realistic driving conditions. The autonomous driving robot presented in Figure 11 travels along a designated route, continuously collecting LiDAR and GPS data. The sensor application on the Jetson board serializes the acquired data into platform-specific raw data types and transmits it to the bridge using SOME/IP Event communication. During this test, two continuous data connections were established through the bridge: one for LiDAR data (sensor_msgs/PointCloud2) and one for GPS data (sensor_msgs/NavSatFix). The bridge decodes the received SOME/IP payload, transforms it into standard ROS2 message types, namely sensor_msgs/PointCloud2 (for LiDAR) and sensor_msgs/NavSatFix (for GPS), and relays these messages to the ROS2 network. Autoware then subscribes to these relayed sensor topics and utilizes them as the input for its autonomous driving algorithms to navigate the planned route.

4.2.2. Scenario Validation

To carry out the validation aligned with the scenario, a real-road driving test was performed using pre-prepared Point Cloud and Vector Maps. Figure 12 shows Autoware’s estimate of the robot’s position on the map and the driving path generated using the real-time sensor data received from the bridge. Figure 13 showcases the robot navigating along the predetermined route, thereby confirming that the proposed bridge operates reliably within a real autonomous driving environment.
In this study, the latency was assessed from the moment that the sensor acquisition application gathered the data to when Autoware received it. The data transmission rates were measured at 10 Hz for LiDAR data and 1 Hz for GPS data, and Figure 14 and Figure 15 illustrate the respective measured latency. The average latency for LiDAR data was recorded at 25.1 ms, with the majority of the distribution lying between 20 ms and 30 ms. For GPS data, which has a considerably smaller size, the average latency was 19.2 ms, and the deviation was also minimal, indicating stable transmission performance. To further quantify this stability, we analyzed the transmission jitter using the standard deviation of the 30 most recent intervals, as shown in Figure 14 and Figure 15. The results confirmed a predictable data stream, with an average jitter of 1.45 ms for LiDAR and 9.13 ms for GPS. These measurements demonstrate that the proposed bridge can effectively relay both large-volume LiDAR data and low-volume GPS data quickly enough to meet the real-time requirements of an autonomous driving system. Importantly, the Autoware used in this experiment typically operates with a computational cycle of 100 ms [23]. The fact that the measured latency is less than this cycle length shows that the proposed bridge possesses adequate performance and practical effectiveness for application in a real autonomous driving context.
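The jitter metric used here, the standard deviation of the 30 most recent inter-arrival intervals, can be computed with a small rolling window, as in the following illustrative sketch (the class name and interface are hypothetical).

```cpp
#include <chrono>
#include <cmath>
#include <cstddef>
#include <deque>

// Rolling jitter: standard deviation of the 30 most recent inter-arrival intervals, in ms.
class JitterMonitor {
public:
    explicit JitterMonitor(std::size_t window = 30) : window_(window) {}

    // Call on every received message; returns the current jitter in milliseconds.
    double on_message(std::chrono::steady_clock::time_point arrival)
    {
        if (has_last_) {
            double interval_ms =
                std::chrono::duration<double, std::milli>(arrival - last_).count();
            intervals_.push_back(interval_ms);
            if (intervals_.size() > window_) intervals_.pop_front();
        }
        last_ = arrival;
        has_last_ = true;
        if (intervals_.size() < 2) return 0.0;

        double mean = 0.0;
        for (double v : intervals_) mean += v;
        mean /= intervals_.size();

        double var = 0.0;
        for (double v : intervals_) var += (v - mean) * (v - mean);
        var /= intervals_.size();
        return std::sqrt(var);
    }

private:
    std::size_t window_;
    std::deque<double> intervals_;
    std::chrono::steady_clock::time_point last_{};
    bool has_last_ = false;
};
```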
However, it is important to define the applicability boundaries of this performance. While the measured 20–30 ms end-to-end latency is well within the 100 ms operational cycle of Autoware’s perception and planning modules, it may not be sufficient for safety-critical functions with hard real-time constraints, such as emergency braking or vehicle dynamics control, which often demand latencies in the single-digit millisecond range. Therefore, the bridge in its current form is best suited to high-level autonomous driving data exchange rather than low-level, safety-critical control signals.

5. Conclusions

This study involved the design, implementation, and validation of a novel dynamic bridge architecture aimed at addressing the limitations associated with existing static bridges for more efficient interoperability between the AUTOSAR Adaptive and ROS2 platforms. The proposed dynamic SOME/IP-DDS bridge enhances resource efficiency and communication stability by dynamically creating and removing communication objects in response to discovery events, along with automatically managing QoS settings.
Performance evaluations revealed that the proposed bridge exhibits stable latency performance even as the data size increases, demonstrating high scalability with almost no latency increase in multi-connection environments. The existing static approach tends to waste memory in proportion to the number of connections in an idle state, while the proposed dynamic method is highly resource-efficient, with nearly no resource consumption during idle periods. In real-world trials conducted with an autonomous driving robot, the bridge showed stable integration with the Autoware-based autonomous driving system, successfully relaying both LiDAR and GPS sensor data with an average latency of 20–30 ms. This performance level sufficiently meets the real-time requirements of Autoware, which typically operates with a computational cycle of 100 ms [23]. These findings suggest that the dynamic bridge introduced in this study can effectively bridge the technological gap between autonomous driving research within ROS2 and actual vehicle development in AUTOSAR Adaptive. By facilitating the efficient and stable integration of ROS2’s extensive open-source ecosystem and development tools into the automotive platform, the prototyping and verification processes for next-generation SDVs can be expedited.
While this study demonstrates the effectiveness of the proposed architecture, it is important to acknowledge its limitations, which in turn define directions for future work. The current validation was conducted using the open-source vsomeip stack, and the experiments on the target embedded ECU were focused on verifying the bridge’s own efficiency (in terms of CPU, memory, and internal latency) as a foundational first step. Building on this, future work should pursue a more comprehensive, system-level evaluation by integrating the bridge with a commercial AUTOSAR Adaptive platform stack. Such a study would allow for analysis under various QoS policies and in an environment that includes production-grade variables like certification requirements, security mechanisms, and strict timing constraints, which were not fully considered here.
Furthermore, to enhance the bridge’s functionality and performance, several key areas will need to be addressed. The current QoS management approach, while ensuring predictability, could be improved by integrating a lightweight QoS negotiation protocol for more adaptive communication in dynamic scenarios. To extend the bridge’s applicability to safety-critical functions, future work will also focus on latency optimization. We plan to investigate advanced techniques such as zero-copy data transfer, priority-based queuing within the bridge, and kernel-level optimizations. Additionally, addressing the conversion overheads observed in the DDS-to-SOME/IP translation at the middleware or hardware level remains an important challenge for meeting the stringent low-latency requirements of hard real-time autonomous systems.
The dynamic bridge architecture proposed in this study will aid in creating a more flexible and efficient automotive software development environment by complementing the strengths of both platforms. It is anticipated that this will promote advancements in autonomous driving and lay a robust foundation for the implementation of safer and more intelligent future transportation systems.

Author Contributions

Conceptualization, S.K. and C.M.; methodology, S.K.; software, S.K., H.C., S.L. and M.K.; validation, S.K. and M.K.; formal analysis, S.K.; investigation, S.K., H.C. and S.L.; resources, C.M.; data curation, S.K.; writing—original draft preparation, S.K.; writing—review and editing, S.K., H.S. and C.M.; visualization, S.K.; supervision, C.M.; project administration, S.K.; funding acquisition, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (P0020536, HRD Program for Industrial Innovation).

Data Availability Statement

The data can be requested from the authors.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

1. Liu, Z.; Zhang, W.; Zhao, F. Impact, Challenges and Prospect of Software-Defined Vehicles. Automot. Innov. 2022, 5, 180–194.
2. Navale, V.M.; Williams, K.; Lagospiris, A.; Schaffert, M.; Schweiker, M.A. (R)evolution of E/E Architectures. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 2015, 8, 282–288.
3. AUTOSAR Adaptive. Available online: https://www.autosar.org/standards/adaptive-platform (accessed on 7 August 2025).
4. Shajahan, M.A.; Richardson, N.; Dhameliya, N.; Patel, B.; Anumandla, S.K.; Yarlagadda, V.K. AUTOSAR Classic vs. AUTOSAR Adaptive: A Comparative Analysis in Stack Development. Eng. Int. 2019, 7, 161–178.
5. ROS Open Robotics. Available online: https://www.ros.org (accessed on 7 August 2025).
6. Autoware. Available online: https://autoware.org (accessed on 7 August 2025).
7. Al-Batati, A.S.; Koubaa, A.; Abdelkader, M. ROS 2 Key Challenges and Advances: A Survey of ROS 2 Research, Libraries, and Applications. Preprint 2024.
8. Ji, Y.; Huang, Y.; Yang, M.; Leng, H.; Ren, L.; Liu, H.; Chen, Y. Physics-informed deep learning for virtual rail train trajectory following control. Reliab. Eng. Syst. Saf. 2025, 261, 111092.
9. Hong, D.; Moon, C. Autonomous Driving System Architecture with Integrated ROS2 and Adaptive AUTOSAR. Electronics 2024, 13, 1303.
10. Iwakami, R.; Peng, B.; Hanyu, H.; Ishigooka, T.; Azumi, T. AUTOSAR AP and ROS 2 Collaboration Framework. In Proceedings of the 2024 27th Euromicro Conference on Digital System Design (DSD), Paris, France, 28–30 August 2024; pp. 319–326.
11. DDS Specification, Version 1.4. Available online: https://www.omg.org/spec/DDS/1.4 (accessed on 7 August 2025).
12. An, K.; Gokhale, A.; Schmidt, D.; Tambe, S.; Pazandak, P.; Pardo-Castellote, G. Content-based filtering discovery protocol (CFDP): Scalable and efficient OMG DDS discovery protocol. In Proceedings of the 8th ACM International Conference on Distributed Event-Based Systems (DEBS '14), New York, NY, USA, 26–29 May 2014; pp. 130–141.
13. ROS2 Documentation, Version Kilted. Available online: https://docs.ros.org/en/kilted/index.html (accessed on 7 August 2025).
14. AUTOSAR, R24-11. SOME/IP Protocol Specification. Available online: https://www.autosar.org/fileadmin/standards/R24-11/FO/AUTOSAR_FO_PRS_SOMEIPProtocol.pdf (accessed on 7 August 2025).
15. Object Management Group (OMG). Data Distribution Service (DDS). Available online: https://www.omg.org/omg-dds-portal (accessed on 7 August 2025).
16. Scalable Service-Oriented MiddlewarE over IP (SOME/IP). Available online: https://some-ip.com (accessed on 7 August 2025).
17. AUTOSAR, R24-11. SOME/IP Service Discovery Protocol Specification. Available online: https://www.autosar.org/fileadmin/standards/R24-11/FO/AUTOSAR_FO_PRS_SOMEIPServiceDiscoveryProtocol.pdf (accessed on 7 August 2025).
18. Henle, J.; Stoffel, M.; Schindewolf, M.; Nägele, A.T.; Sax, E. Architecture platforms for future vehicles: A comparison of ROS2 and Adaptive AUTOSAR. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; pp. 3095–3102.
19. Ioana, A.; Korodi, A.; Silea, I. Automotive IoT Ethernet-Based Communication Technologies Applied in a V2X Context via a Multi-Protocol Gateway. Sensors 2022, 22, 6382.
20. Cakır, M.; Häckel, T.; Reider, S.; Meyer, P.; Korf, F.; Schmidt, T.C. A QoS Aware Approach to Service-Oriented Communication in Future Automotive Networks. In Proceedings of the 2019 IEEE Vehicular Networking Conference (VNC), Los Angeles, CA, USA, 4–6 December 2019; pp. 1–8.
21. Cho, Y.; Lee, S.; Yang, J.; Cho, J. Integration of AUTOSAR Adaptive Platform with ROS2 through Network Binding. In Proceedings of the 2024 IEEE 13th Global Conference on Consumer Electronics (GCCE), Kitakyushu, Japan, 29 October–1 November 2024; pp. 1175–1176.
22. AUTOSAR, R23-11. Specification of Manifest. Available online: https://www.autosar.org/fileadmin/standards/R23-11/AP/AUTOSAR_AP_TPS_ManifestSpecification.pdf (accessed on 7 August 2025).
23. Autoware, v0.45.1. Defining Temporal Performance Metrics on Components. Available online: https://autowarefoundation.github.io/autoware-documentation/main/how-to-guides/others/defining-temporal-performance-metrics (accessed on 7 August 2025).
Figure 1. Dynamic SOME/IP-DDS bridge architecture.
Figure 2. Dynamic SOME/IP-DDS bridge workflow (Adaptive Application to ROS2 node). The numbered steps in the diagram correspond to the detailed descriptions provided in Section 3.2.1.
Figure 3. Dynamic SOME/IP-DDS bridge workflow (ROS2 node to Adaptive Application). The numbered steps in the diagram correspond to the detailed descriptions provided in Section 3.2.2.
Figure 4. Dynamic SOME/IP-DDS bridge workflow (remove connection). The numbered steps in the diagram correspond to the detailed descriptions provided in Section 3.2.3.
Figure 5. Analysis of the bridge’s end-to-end latency according to data size. (a) Overall latency distribution and (b) average latency by processing stage on the PC. (c,d) show the corresponding results on the Jetson Xavier NX.
Figure 6. Latency distribution by data size and connection count. (a) On the PC. (b) On the Jetson Xavier NX.
Figure 7. Comparison of idle state resource usage on PC and Jetson Xavier NX platforms when the bridge is configured in “Static Mode”. (a) Average CPU usage by connection count. (b) Average memory usage by connection count.
Figure 8. Performance heatmaps of the bridge on the embedded platform. (a) Mean latency. (b) Standard deviation of latency.
Figure 9. Autonomous driving robot platform.
Figure 10. Scenario driving route.
Figure 11. Hardware configuration of the autonomous driving robot.
Figure 12. Real-time localization and path planning on Autoware.
Figure 13. Autonomous driving test on the scenario route.
Figure 14. Autonomous driving robot’s LiDAR data transmission performance. (a) Latency. (b) Jitter.
Figure 15. Autonomous driving robot’s GPS data transmission performance. (a) Latency. (b) Jitter.
Table 1. Comparison of AUTOSAR Adaptive–ROS2 bridging approaches.
Study | Deployment Model | SOME/IP-SD | DDS Discovery | Discovery-Level Synchronization | Resource Management | DDS QoS Policy
1. D. Hong et al. (2024) [9] | ROS2 bridge node (ARISA) | Supported | Implicit via ROS2 | SOME/IP-SD only | NR | NR
2. R. Iwakami et al. (2024) [10] | ROS2 node (DDS-SOME/IP converter) | Supported | Implicit via ROS2 | SOME/IP-SD only | NR | NR
3. Y. Cho et al. (2024) [21] | AUTOSAR application (network binding with ROS2) | NR | NR | NR | NR | NR
4. Proposed (this work) | Dynamic bridge (Standalone Application) | Supported | Supported | Cross-domain | Supported | Strict enforcement
Abbreviations: NR = not reported in the cited paper.
Table 2. Dynamic SOME/IP-DDS bridge configuration parameters.
Parameter Name | Scope | Description
Type | All | Specifies the type of mapping rule (one of Event, Method, or Field).
someip.serviceID | All | The unique ID of the target SOME/IP service to be mapped.
someip.instanceID | All | The ID of the specific instance providing the service.
dds.qos | All | The QoS profiles to be applied to the DDS communication.
someip.eventgroupID | Event | The ID of the event group containing the event to be subscribed to.
someip.eventID | Event | The ID of the specific event to be mapped.
someip.methodID | Method | The ID of the specific method to be mapped.
someip.fieldID | Field | The ID of the specific field to be mapped.
dds.topic | Event, Field | The DDS topic for publishing one-way data (Event, Field Notification).
someip.eventType | Event, Field | The data type of the Event message.
dds.messageType | Event, Field | The message type to be used in the one-way data publishing topic.
dds.requestTopic | Method, Field | The DDS topic for RPC (Method, Field Setter/Getter) requests.
someip.requestType | Method, Field | The data type of the request message.
dds.requestMessageType | Method, Field | The message type to be used in the request topic.
dds.responseTopic | Method, Field | The DDS topic for RPC (Method, Field Setter/Getter) responses.
someip.responseType | Method, Field | The data type of the response message.
dds.responseMessageType | Method, Field | The message type to be used in the response topic.
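To make the mapping-rule format in Table 2 more concrete, the following C++ sketch shows a hypothetical in-memory representation of one Event rule. The struct layout, field names, and all example values are illustrative assumptions and do not reproduce the bridge’s actual configuration format.

    #include <cstdint>
    #include <string>

    // Hypothetical representation of one Event mapping rule, mirroring the
    // Table 2 parameters (names and values are illustrative only).
    struct EventMappingRule {
        std::string type;               // Type: "Event"
        std::uint16_t service_id;       // someip.serviceID
        std::uint16_t instance_id;      // someip.instanceID
        std::uint16_t eventgroup_id;    // someip.eventgroupID
        std::uint16_t event_id;         // someip.eventID
        std::string someip_event_type;  // someip.eventType
        std::string dds_topic;          // dds.topic
        std::string dds_message_type;   // dds.messageType
        std::string dds_qos_profile;    // dds.qos
    };

    // Example: bridging a hypothetical LiDAR point-cloud event to a DDS topic.
    const EventMappingRule kExampleLidarRule{
        "Event",
        0x1234,                          // example service ID
        0x0001,                          // example instance ID
        0x0001,                          // example event group ID
        0x8001,                          // example event ID
        "PointCloudByteStream",
        "rt/sensing/lidar/points",
        "sensor_msgs::msg::PointCloud2",
        "sensor_data_profile"
    };

A Method or Field rule would instead carry the corresponding parameters from Table 2 (for example, someip.methodID together with dds.requestTopic and dds.responseTopic).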
Table 3. QoS policies supported by the dynamic SOME/IP-DDS bridge.
QoS Name | Description
History | Specifies the method and size for storing received or sent data.
Reliability | Prioritizes reliability by preventing data loss (Reliable, comparable to TCP) or prioritizes communication speed (Best Effort, comparable to UDP).
Durability | Determines whether past data is provided to subscribers that join later.
Deadline | Triggers an event if data transmission or reception does not occur within a specified time.
Lifespan | Treats only data received within a defined period as valid; other data is discarded.
Liveliness | Periodically or explicitly checks the active status of a node or topic.
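For illustration, the policies in Table 3 can be expressed on the ROS2 side with the standard rclcpp QoS API, as in the C++ sketch below. The specific values (history depth, deadline, lifespan, and lease duration) are example assumptions rather than the bridge’s defaults, which are instead supplied through the dds.qos parameter in Table 2.

    #include <rclcpp/rclcpp.hpp>

    // Illustrative QoS profile touching each policy listed in Table 3.
    rclcpp::QoS make_example_qos() {
        rclcpp::QoS qos(rclcpp::KeepLast(10));                  // History: keep the last 10 samples
        qos.reliable();                                         // Reliability: retransmit to prevent loss
        qos.transient_local();                                  // Durability: deliver past data to late joiners
        qos.deadline(rclcpp::Duration(0, 100000000));           // Deadline: expect a new sample every 100 ms
        qos.lifespan(rclcpp::Duration(0, 500000000));           // Lifespan: samples older than 500 ms are dropped
        qos.liveliness(RMW_QOS_POLICY_LIVELINESS_AUTOMATIC);    // Liveliness: asserted automatically by the middleware
        qos.liveliness_lease_duration(rclcpp::Duration(1, 0));  // Liveliness lease of 1 s
        return qos;
    }

Because the bridge strictly enforces compatible settings on both sides of a connection, a profile such as this one would be applied consistently rather than negotiated at runtime.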
Table 4. Dynamic SOME/IP-DDS bridge performance evaluation experiment environment.
Category | Item | Specification
PC | Processor | Intel® Core i7-10700K CPU @ 3.8 GHz (Intel, Santa Clara, CA, USA)
PC | Memory | 32 GB
PC | OS | Ubuntu 22.04 LTS
PC | DDS | Fast DDS v3.2.2
PC | SOME/IP | vsomeip v3.5.6
Jetson Xavier NX | Processor | 6-core NVIDIA Carmel ARM® v8.2 64-bit CPU, 6 MB L2 + 4 MB L3 (NVIDIA, Santa Clara, CA, USA)
Jetson Xavier NX | Memory | 8 GB
Jetson Xavier NX | OS | Ubuntu 20.04
Jetson Xavier NX | DDS | Fast DDS v3.2.2
Jetson Xavier NX | SOME/IP | vsomeip v3.5.6
Table 5. Proportion of latency by processing stage [%].
Stage | 1 KB | 2 KB | 4 KB | 8 KB | 16 KB | 32 KB | 64 KB | 128 KB | 256 KB
Direction (PC), DDS to SOME/IP:
DDS Subscribe | 9.90 | 10.41 | 11.53 | 8.74 | 11.02 | 8.79 | 4.34 | 2.82 | 2.26
Data Convert | 17.23 | 19.42 | 20.38 | 17.54 | 20.00 | 21.60 | 11.83 | 23.54 | 26.52
Queue | 30.99 | 32.43 | 26.03 | 28.44 | 24.86 | 18.93 | 15.90 | 10.39 | 8.36
SOME/IP Publish | 41.88 | 37.74 | 42.06 | 45.28 | 44.12 | 50.68 | 67.93 | 63.25 | 62.86
Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Direction (PC), SOME/IP to DDS:
SOME/IP Subscribe | 0.56 | 1.33 | 2.19 | 4.54 | 3.78 | 5.01 | 4.39 | 4.13 | 3.68
Queue | 18.32 | 18.32 | 18.21 | 18.74 | 16.28 | 13.85 | 9.30 | 6.92 | 6.54
Data Convert | 12.36 | 12.36 | 13.89 | 13.18 | 15.57 | 19.08 | 15.51 | 12.66 | 11.44
DDS Publish | 68.76 | 67.99 | 65.71 | 63.54 | 64.37 | 62.06 | 70.80 | 76.29 | 78.34
Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Direction (Jetson), DDS to SOME/IP:
SOME/IP Subscribe | 7.21 | 8.24 | 7.95 | 5.46 | 5.96 | 5.17 | 3.78 | 2.06 | 1.53
Queue | 29.80 | 49.79 | 19.61 | 24.17 | 18.91 | 20.81 | 17.22 | 27.48 | 30.11
Data Convert | 16.70 | 16.24 | 7.95 | 7.46 | 7.06 | 6.42 | 9.19 | 5.88 | 4.44
DDS Publish | 46.29 | 25.73 | 64.50 | 62.90 | 68.07 | 67.60 | 69.81 | 64.88 | 63.92
Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Direction (Jetson), SOME/IP to DDS:
SOME/IP Subscribe | 1.92 | 2.09 | 2.10 | 2.37 | 3.41 | 2.82 | 2.26 | 1.93 | 1.58
Queue | 23.19 | 17.79 | 21.46 | 17.09 | 16.21 | 15.78 | 10.65 | 5.83 | 3.97
Data Convert | 17.71 | 17.66 | 16.71 | 18.46 | 18.06 | 16.26 | 13.06 | 8.36 | 6.98
DDS Publish | 57.18 | 62.46 | 59.73 | 62.08 | 62.32 | 65.14 | 74.02 | 83.87 | 87.47
Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Table 6. Autonomous driving robot platform experiment environment.
Category | Item | Specification
H.W. (Platform) | UGV | AgileX Hunter SE (AgileX Robotics, Shenzhen, China)
H.W. (Platform) | Sensor Processing Board | Jetson Xavier NX 8 GB (NVIDIA)
H.W. (Platform) | Laptop | OMEN 15-EK0013DX (HP, Palo Alto, CA, USA)
H.W. (Platform) | Ethernet Switch | ipTIME H6005 mini (EFM Networks, Yongin, Republic of Korea)
H.W. (Sensors) | LiDAR | Velodyne VLP-16 (Mapix Technologies, Edinburgh, UK)
H.W. (Sensors) | GPS | u-blox ZED-F9P RTK (u-blox, Thalwil, Switzerland)
S.W. (Sensor Processing Board) | OS | Ubuntu 20.04
S.W. (Sensor Processing Board) | SOME/IP | vsomeip v3.5.6
S.W. (Laptop) | OS | Ubuntu 22.04
S.W. (Laptop) | DDS | Fast DDS v3.2.2
S.W. (Laptop) | SOME/IP | vsomeip v3.5.6
S.W. (Laptop) | ROS | ROS2 Humble
S.W. (Laptop) | Autoware | Autoware v0.45.1