Article

Microhooks: A Novel Framework to Streamline the Development of Microservices

School of Science and Engineering, School of Business Administration, Al Akhawayn University in Ifrane, BP104, Ifrane 53000, Morocco
*
Author to whom correspondence should be addressed.
Computers 2025, 14(4), 139; https://doi.org/10.3390/computers14040139
Submission received: 1 February 2025 / Revised: 28 February 2025 / Accepted: 4 March 2025 / Published: 7 April 2025

Abstract

The microservices architectural style has gained widespread adoption in recent years thanks to its ability to deliver high scalability and maintainability. However, the development process for microservices-based applications can be complex and challenging. Indeed, it often requires developers to manage a large number of distributed components with the burden of handling low-level, recurring needs, such as inter-service communication, brokering, event management, and data replication. In this article, we present Microhooks: a novel framework designed to streamline the development of microservices by allowing developers to focus on their business logic while declaratively expressing the so-called low-level needs. Based on the inversion of control and the materialized view patterns, among others, our framework automatically generates and injects the corresponding artifacts, leveraging 100% build-time code introspection and instrumentation, as well as context building, for optimized runtime performance. We provide the first implementation for the Java world, supporting the most popular containers and brokers, and adhering to the standard Java/Jakarta Persistence API. From the user perspective, Microhooks exposes an intuitive, container-agnostic, broker-neutral, and ORM framework-independent API. The evaluation of Microhooks against state-of-the-art practices has demonstrated its effectiveness in drastically reducing code size and complexity, without incurring any considerable performance cost. Based on such promising results, we believe that Microhooks has the potential to become an essential component of the microservices development ecosystem.

1. Introduction

The internet has revolutionized our lives in many ways and has created a plethora of solutions for everyday problems. From e-commerce to transportation and social media to entertainment, large-scale systems have been created to serve hundreds of millions of people. Popular applications like Netflix, Uber, and Amazon have changed the way people shop, travel, and stay connected while providing a consistently high level of quality and user experience.
To ensure high-quality software, big software companies employ various measures and procedures to increase productivity. In particular, DevOps practices are adopted to facilitate a smooth and fast transition of code across environments without compromising the efficiency and stability of the operational environment [1,2]. Despite all these practices, the surge in demand for these disruptive applications has prompted a shift in their architectural design. Many software companies have found that their traditional monolithic architectures present several obstacles and limitations to their productivity and quality of service. The monolithic architecture makes services tightly coupled, affecting their scalability and deployability [3,4]. This coupling results from cross-service dependencies and a single shared state of the application code. Moreover, software developers are faced with a large codebase that is hard to understand and refactor, since changes and updates can generate severe side effects on other modules [3,4].
For these reasons, many companies migrate their monolithic architectures to microservices architectures [5] and restructure their teams to mirror the loosely coupled services. In such an architectural style, the applications are designed and built as a collection of various small, self-contained, and independent service units called microservices. Consequently, experimenting with new features or adding new functionality to microservices poses less risk since they are loosely coupled by design. Furthermore, this architecture goes beyond agility and software team organization and provides critical benefits like fault isolation, data independence, scalability, and higher performance [6]. The microservices architecture also allows for fine-grained scaling, as services can be scaled independently, as needed, thus allowing for better resource utilization. Moreover, it enables teams to become more autonomous, work independently, and adopt new and different technologies.
Unfortunately, these advantages come at a cost [7]. Microservice applications utilize several additional infrastructure modules. For instance, a service registry/discovery module becomes indispensable for determining which microservice instance should receive and process user requests. In this case, the service registry maintains a global view of addresses and ports on which instances of different microservices are running [8]. In addition, microservice replication and load balancing are used to distribute the load on several microservice instances to ensure scalability and high availability. Furthermore, distributed tracing and logging are implemented to monitor application actions and errors, where requests are tracked and logged. Yet, microservice fault analysis and debugging remain challenging [9]. To alleviate this infrastructure-level cost associated with microservices, frameworks like Spring Cloud [10] and Netflix OSS [11,12] have been developed to provide a stable and generic infrastructure that can be used regardless of the project’s nature. These frameworks provide a unified interface to quickly configure and deploy microservice applications and take advantage of the infrastructure features they offer.
However, many of the costs associated with microservices are not limited to these infrastructure-level issues. Even though microservices aim to be completely independent, communication between them is still necessary. For instance, in a social media application, loading feed content for a user through the feed microservice requires interacting with the user microservice to obtain user-specific data and with the recommendation microservice to generate the feed content. This interaction consists of queries, commands, and events, and can be implemented with the materialized view pattern. This pattern eliminates coupling between microservices by storing local copies of data owned by other domains. At the same time, the microservices owning the original data send asynchronous messages to continuously update the view. This means that developers need to take on additional tasks such as publishing events, subscribing, listening to topics, and triggering the corresponding actions. This takes away from the developer's focus on the application's business logic, resulting in extra overhead and boilerplate code.
In this paper, we present Microhooks: a framework that aims to streamline the development of microservices by alleviating the burden of such boilerplate code. In its first version, Microhooks focuses on the issue of distributed data management across multiple microservices, one of the most challenging ones [9], and abstracts its complexity by automating the communication between them. Developers no longer need to implement inter-service communication, event management, data fetching, and replication. Instead, they only need to use the intuitive, annotation-based API provided by the framework to declaratively express the relationships between microservices. Microhooks takes care of the underlying mechanisms without causing any significant impact on application performance. Its source code is available on GitHub [13].
The rest of this paper is organized as follows. Section 2 provides a background about the microservices architectural style, as well as its patterns; Section 3 describes the state of the art. Section 4 introduces Microhooks from the user perspective, while Section 5 and Section 6 detail its design and implementation, respectively. Section 7 thoroughly evaluates the effectiveness and efficiency of our framework. The last section summarizes this work’s contributions and provides future direction.

2. Background

2.1. Microservices Architectural Style

Microservices is an architectural style for developing loosely coupled, isolated, but collaborating services that together constitute a single application. As a result, these microservices are highly maintainable, easily testable, and independently deployable. This architectural pattern aims to address the problems encountered in traditional tiered and monolithic architectures. Such problems relate to the following non-functional requirement areas: scalability, maintainability, and security. For scalability, we would like to preserve performance when the load grows as a result of an increasing customer base or an increasing number of transactions [3,4,6]. Microservices solve this issue by enabling independent scaling of the services that experience higher demand. For maintainability, the microservices architecture allows for the isolated fixing of security flaws and logical bugs without affecting the overall operation of the application. For security, the attack surface and potential impact related to each microservice are naturally lower than those pertaining to a whole monolithic application. All these points are more easily attainable with microservices than with a monolith, thanks to the share-nothing philosophy of microservices, which results in a set of decoupled services. To further promote loose coupling among microservices, event-driven architecture (EDA) is adopted. It consists of using brokered, event-based communication among services, as opposed to the traditional request/response paradigm.

2.2. Relevant Microservice Patterns

The development of software based on microservices can be quite confusing and difficult. As with previous development paradigms, patterns were established to highlight the best practices to be followed and the worst ones to be avoided (anti-patterns). These patterns not only help developers use the microservices architecture successfully but also help them decide whether microservices are suitable for their use cases. There are several microservice patterns, such as the API Gateway pattern, which is used to route and manage external traffic to the microservices, and the circuit breaker pattern, which helps prevent cascading failures and improve the resilience of the system. Here, we focus on the three patterns most relevant to our work:

2.2.1. The Database-per-Service Pattern

This pattern stems from the well-known domain-driven design (DDD) [14,15], whereby a large business domain is broken down into several smaller, more cohesive (sub)domains, each with well-defined boundaries. Then, objects or entities within each domain are identified and modeled. Following DDD, a traditional large data model is instead divided into a set of smaller data models, each representing a specific domain. DDD can be leveraged to segregate microservices so that each bounded context corresponds to a microservice describing a distinct problem area. In fact, DDD is the first step to identifying and designing microservices. Each microservice represents an abstraction over its own data model and, hence, its own database. According to AWS [16], with loose coupling being the core characteristic of the microservices architecture, each microservice should be able to independently store and retrieve information from its own data store. This is the database-per-service pattern; by adopting it, each microservice can select the data store that best suits its application and business requirements. Consequently, microservices do not share a data layer, changes to a microservice database do not affect other microservices, individual data stores cannot be accessed directly by other microservices, and persistent data may only be accessed via APIs. Decoupling data stores also improves overall application resiliency and ensures that a single database cannot be a single point of failure. It also enables microservices to be developed, deployed, and scaled independently.

2.2.2. The Data Replication/Materialized View Pattern

According to the CAP theorem, a distributed system can provide only two of the following three properties: consistency, availability, and partition tolerance [17]. Because of the distributed nature of microservices, especially when adopting the database-per-service pattern, network partitions are a given; hence, we must choose between consistency and availability. Strong consistency is achieved through the traditional approach to implementing distributed systems, where each piece of information required by a microservice is requested from the microservice that owns the data. Not only does this approach consume network resources, but it also adds an internal load of requests that must be processed by the owning microservice on top of the regular load of external client requests. For this reason, we favor availability over strong consistency, which we relax to eventual consistency. This is achieved by replicating data where they are or will be needed. In practice, this pattern is implemented by having microservices maintain materialized views on data owned by other microservices [18].

2.2.3. The Command Query Responsibility Segregation Pattern

This pattern further enhances the decoupling of components and helps improve the scalability of the architecture [19]. CQRS is a principle that dictates that the data model for a service be divided into two parts: a command model, which is responsible for writes, and a query model, which is responsible for reads. This separation of concerns allows for more flexibility and scalability in the design of a microservice. It also helps to reduce coupling between services as each service can focus on its own data model and business logic.

3. Related Work

In a vision paper titled “A Distributed Database System for Event-based Microservices”, the authors argue for the necessity of a novel distributed database management system specialized for event-driven microservices [20]. They claim that such a system would better support microservices and ease the burden of managing the complexity of data management. The paper elaborates on the downside of loosely coupled design, which implicitly entails a decentralized data management architecture. This decentralization leaves developers with no option other than implementing substantial logic for cross-microservice dependencies and data handling. As a solution to this problem, the authors advocate pushing data management logic into the database for processing. Such a solution is realized through a new abstraction named virtual microservices, which represent the computations performed by microservices within the database system. Consequently, the database gains knowledge of the data lifecycle outside of the data store and, subsequently, natively supports microservices properties, such as strong isolation, data ownership, and autonomy.
In a systematic gray literature review about the pains and gains of microservices, 51 industrial studies published between 2014 and the end of 2017 were considered [21]. Two taxonomies were extracted, one for microservices pains and the other for their gains. These taxonomies were then used to categorize and compare the selected industrial studies in order to capture the IT industry's actual perception of the hardships and rewards of microservices. The study concluded that the microservices pains stem primarily from the inherent complexity of microservices-based solutions. Managing distributed storage is one of the primary pains at development time. Pains listed in this category are data consistency, distributed transactions, heterogeneity, and query complexity. At operation time, network resource consumption also arises as one of the microservices drawbacks.
Another systematic literature review revealed that practitioners often struggle with data management when working with microservices [22]. More specifically, the authors found that data replication across microservices was often not supported, making it difficult to maintain data synchronization. To further support their findings, they analyzed a set of popular open-source microservice applications and conducted an online survey to cross-validate the results. The survey revealed that data replication across services was one of the biggest challenges faced by practitioners, with 68% of respondents reporting the need for such a feature. To address these challenges, the researchers proposed a system-level solution: a set of features for a future microservice-oriented database management system [22]. It is worth mentioning that we adopted a completely different vision and approach when designing Microhooks by positioning ourselves at the application level. Indeed, we deem that the application itself is the right place where specific business needs and contexts can be captured and taken into consideration. Moreover, we do not require any specific DBMS, let alone one that has yet to be developed, to support our solution.
Beyond visions and recommendations, and to the best of our knowledge, Eventuate is the only concrete solution that comes close to what Microhooks offers. Eventuate is designed as a family of frameworks, Eventuate Tram and Eventuate Local, that attempt to address the distributed data management challenges in the microservices architecture stemming from the database-per-service pattern [23]. Eventuate Tram is a framework for services that use traditional persistence, e.g., the Java/Jakarta Persistence API (JPA) [24]. On the other hand, Eventuate Local is an event-sourcing framework. By event sourcing, we mean an event-centric business logic and persistence programming model. Eventuate manifests itself as an API for publishing and consuming events.
Compared to Eventuate, Microhooks provides a complete abstraction of materialized view creation and synchronization through automated event publication and consumption. Consequently, it reduces the complexity and length of the code while providing an even simpler and more straightforward interface. Moreover, Eventuate's compatibility is limited: regarding message brokers, it supports only Apache Kafka, whereas Microhooks is broker-agnostic.

4. Microhooks from a User Perspective

4.1. Purpose, Scope, and Main Concepts Through Use Cases

We believe that the best way to introduce Microhooks is through use cases. Without loss of generality, we consider three microservices. One of them acts as the source of truth regarding some data entity, while the two others keep observing it through their respective materialized views. In Microhooks vocabulary, the first microservice is called the source microservice or just source, while each of the two others is called a sink microservice or just a sink. It goes without saying that a microservice can act as a source in some contexts and as a sink in others. The entity holding the truth at the level of the source is called the source entity, while the materialized views at the level of the sinks are called the sink entities.
As shown in Figure 1, we assume that each microservice may use any IoC container and any JPA-compliant ORM framework. That is, each microservice may use an IoC container different from those used by the others; the same applies to the ORM frameworks. In the Java world, and as of the time of writing, Spring [25], Micronaut [26], and Quarkus [27] are the most popular IoC frameworks, while Hibernate [28] and EclipseLink [29] are the two most popular JPA-compliant ORM frameworks. Finally, we assume that any broker may be used as a messaging and event streaming platform among the three services. In practice, this would be Kafka, RabbitMQ, RocketMQ, etc.

4.1.1. First Use Case

In this first use case, we are interested in keeping the sink entities synchronized with the source entity. Moreover, we believe that a sink entity should not necessarily receive updates regarding all attributes of the source entity. This is because the sink may not be interested in—or authorized to access—some attributes. Here, we introduce the concept of projection, which is a class defining a subset of the source entity’s attributes. Projections enable a source to enforce the need-to-know principle by restricting the subset of attributes that should be and can be streamed to each sink.
In our use case, we assume that each of the two sinks is interested in, or authorized to access, a different subset of attributes. Hence, we define two respective projections and create two separate streams. With such a setup, all that the source developer is left with is as follows:
  • Marking the entity as a source entity;
  • Mapping, at the level of that entity, each (output) stream to its corresponding projection.
On the other side, all that the sink developer needs to do is mark the sink entity as such and map it to the appropriate (input) stream. Figure 2 presents the corresponding code. Two annotations, one on each side, create the magic and make all the underlying details transparent to the source and sink developers.
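As a minimal sketch of what such code looks like (the Product and ProductView entities and the catalog stream are hypothetical, and we assume projections are referenced by their qualified class names):

    import io.microhooks.sink.Sink;
    import io.microhooks.source.Projection;
    import io.microhooks.source.Source;
    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;

    // Source microservice: one annotation marks the entity as a source and
    // maps the output stream to a projection restricting what the sink sees.
    @Source(mappings = {"catalog:com.example.ProductProjection"})
    @Entity
    public class Product {
        @Id @GeneratedValue
        private Long id;
        private String name;
        private double price;
        private double purchaseCost; // internal; excluded from the projection
    }

    @Projection
    public class ProductProjection {
        private String name;  // only these fields are streamed to the sink
        private double price;
    }

    // Sink microservice (a separate application): one annotation marks the
    // materialized view and binds it to its input stream; Microhooks keeps
    // it synchronized.
    @Sink(stream = "catalog")
    @Entity
    public class ProductView {
        @Id
        private Long id;
        private String name;
        private double price;
    }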

4.1.2. Second Use Case

In this second use case, we go beyond the materialized view pattern of keeping sink entities synchronized with the source entity. Here, the source microservice needs to publish user-defined events upon an entity's record creation, update, or deletion, based on specific requirements. Then, on the sink side, the developer provides the logic to process the published events, depending on specific requirements. In Microhooks jargon, such events are called custom events. The entity whose record creation, update, or deletion triggers the custom event production logic is called a custom source. On the other side, the class that defines the custom event processing logic is called a custom sink. In practice, a custom sink is modeled as a singleton, such as a stateless service.
To produce a custom event, the developer conducts the following:
  • Marks the entity as a custom source;
  • Exposes to Microhooks a callback that returns the event to be published. That callback encapsulates the logic to generate the event, including any precondition verification;
  • Marks that callback with the appropriate annotation to specify the triggering operation, namely, the record creation, update, or deletion, as well as the stream or streams through which the event should be sent.
Then, Microhooks takes care of all the rest, that is, listening to the triggering operations, invoking the appropriate callback(s), obtaining the returned event(s), and streaming them through their respective streams. In the case of events triggered by an update operation, Microhooks sends a map to the callback containing all fields that have changed along with their respective previous values. In this way, the developer has a convenient means to determine what exactly has changed and react accordingly.
On the custom sink side, the developer conducts the following:
  • Marks a class, a singleton service in practice, as a custom sink;
  • Exposes to Microhooks a callback that expects an event as a parameter. That callback implements the event processing logic according to the business requirements;
  • Maps that callback to a specific stream through a simple annotation.
Then, Microhooks takes care of listening to incoming events and, based on their respective (input) streams, identifies the correct sink service object(s) along with their specific callback(s) to invoke. Figure 3 illustrates what the code looks like at the level of each microservice.

4.1.3. Third Use Case

In this third and last use case, we consider the processing of a normal end-user request by the source microservice. After receiving and verifying the request, the controller invokes a business method on an injected/wired service object. As part of its business logic, we assume that the method needs to publish two events to the two sink microservices, respectively. For such a need, Microhooks offers an easy-to-use, broker-agnostic EventProducer. The developer can simply do the following:
  • Inject an EventProducer instance in the service;
  • Use it from within the business method to publish events through the appropriate streams, as needed.
This is illustrated in Figure 4. As for the sink side, processing such events is no different from processing the custom events in the previous use case.
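A hypothetical sketch of such a service follows (the OrderService and Order types, the stream names, and the publish signature are assumptions; EventProducer and Event belong to the Microhooks API):

    import io.microhooks.common.Event;
    import io.microhooks.source.EventProducer;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderService {

        @Autowired
        private EventProducer eventProducer; // broker-agnostic wrapper

        public void placeOrder(Order order) {
            // ... validate and persist the order (business logic) ...

            // Publish one event per interested sink through its stream;
            // publish(stream, key, event) is an assumed signature.
            Event<Long> event = new Event<>(order.getId(), "ORDER_PLACED");
            eventProducer.publish("shippingStream", order.getId(), event);
            eventProducer.publish("billingStream", order.getId(), event);
        }
    }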
A full example is available on the Microhooks GitHub repository [13]. The source microservice uses Spring, while the two sink microservices use Micronaut and Quarkus, respectively. The broker is Kafka and the ORM framework is Hibernate.

4.2. Functional Requirements

Considering and elaborating on the presented use cases, we specify the detailed functional requirements. First, we identify two actors: the source microservice developer and the sink microservice developer. Then, we organize the functional requirements accordingly.

4.2.1. The Source Microservice Developer

  • Shall make any of his/her entities available as a source of truth, in a proactive manner, i.e., using push mode.
  • Shall accommodate different sinks with different materialized views.
  • Shall create custom events based on custom logic triggered by an entity record creation, update, or deletion.
  • Shall create custom events based on custom logic triggered as part of the end-user request processing flow.
  • Shall publish every custom event through one or more streams.

4.2.2. The Sink Microservice Developer

  • Shall create materialized views (sink entities) and map each to exactly one (input) stream.
  • Shall process custom events from input streams of interest.

4.3. Non-Functional Requirements

4.3.1. Usability

The microservice developer shall use Microhooks in a declarative way, i.e., by just decorating the concerned microservice artifacts (classes, methods, and fields) with simple and intuitive annotations. For common scenarios, e.g., implementing the materialized view pattern, no further user action shall be required. As for specific cases where custom business logic needs to be specified, the user shall only wrap such logic in a method and decorate it with the appropriate annotation. Microhooks shall decide when to call that method back on behalf of the user and how to process its return value. In rare cases where the user needs to take a forward action, as opposed to defining a callback, all the low-level details should remain transparent.

4.3.2. Performance

Microhooks shall fulfill its functional requirements without incurring a high cost on performance. The overhead caused by Microhooks shall remain acceptable.

4.3.3. Security

The need-to-know principle shall be enforced. More specifically, a sink service shall not receive more data than needed. For example, if it just needs a subset of the source entity’s attributes, then it shall not receive any additional attribute that is not in the needed subset. Therefore, the source shall restrict, for each source entity, the attributes it shares about its records on a per-stream basis.

4.3.4. Interoperability

Microhooks shall work for microservices using any container, any broker, and any ORM framework. Moreover, microservices interacting through Microhooks may be using different containers or ORM frameworks.

4.3.5. Portability

Beyond interoperability, users shall have one unified API, regardless of the container, the broker, and the ORM framework they use. Their code shall be portable from one Java environment to another.

4.4. Application Programming Interface—API

We could have deferred the API description until the next section on design, but we preferred to present it here as part of the user perspective. If one were interested in just using Microhooks without necessarily understanding its internals, s/he could read up to the end of this subsection and no further.
For user convenience, we organized the API around three packages: one for the source side, one for the sink side, and one that is common for both. We already introduced several API artifacts in the use case scenarios.

4.4.1. io.microhooks.source

This is the package for the source side. It offers the following artifacts:
  • Source: This annotation marks an entity as a source for the materialized view pattern. By doing so, Microhooks handles the propagation of record creation, update, and deletion events to the intended recipients (sinks). Instead of identifying those recipients, the user specifies the streams through which the events must be sent and the projection for each one. Therefore, the Source defines a required property, mappings, as an array of strings. Each string is in the following format: stream:projection, as shown in Listing 1.
  • Projection: This annotation is used to mark projection classes. It defines no properties. Its purpose is to facilitate the copying of an entity’s fields that have matching ones in the projection while ignoring the rest.
  • CustomSource: This annotation marks an entity to define custom logic that is executed upon the creation, update, or deletion of its records. It defines no property.
  • Track: This annotation goes hand in hand with the CustomSource annotation, especially when reacting to updates. It allows for marking fields whose changes must be tracked. Without such an annotation, Microhooks would need to track all entity fields for changes, which is sub-optimal.
  • ProduceEventOnCreate: This annotation marks a CustomSource entity’s method as a callback defining custom logic, to be executed whenever a record is created. It defines one required streams property that allows specifying the array of streams through which the produced event must be sent. The method decorated by this annotation shall take no parameters and must return an event.
  • ProduceEventsOnCreate: As opposed to ProduceEventOnCreate, this annotation marks a CustomSource entity’s method as a callback that returns several events upon record creation, each of which is to be sent through several streams. It defines no property. The callback method decorated by this annotation shall take no parameters and shall return a map whose key is an event to be sent and whose value is the list of corresponding streams.
  • ProduceEventOnUpdate: This annotation marks a CustomSource entity’s method as a callback defining custom logic, to be executed whenever a record is updated, and at least one of its fields marked with Track has changed. It defines one required streams property that allows specifying the array of streams through which the produced event must be sent. The callback method decorated by this annotation shall take a map of fields that have changed along with their corresponding old values and return an event. The map of changed fields is constructed by Microhooks and passed to the user code as a parameter. This is convenient to implement custom logic based on specific changes of specific fields and their combinations. The user has access to previous values of tracked fields through the passed map, as well as to the current values through the entity’s instance variables.
  • ProduceEventsOnUpdate: The same as ProduceEventOnUpdate, except that it defines no property, and it marks methods that are to return a map of events and the respective streams through which they must be sent.
  • ProduceEventOnDelete: This annotation marks a CustomSource entity’s method as a callback defining custom logic, to be executed whenever a record is deleted. It defines one required streams property that allows specifying the array of streams through which the produced event must be sent. The method decorated by this annotation shall take no parameters and shall return an event.
  • ProduceEventsOnDelete: As opposed to ProduceEventOnDelete, this annotation marks a CustomSource entity’s method as a callback that returns several events upon record deletion, each of which is to be sent through several streams. It defines no property. The callback method decorated by this annotation shall take no parameters and shall return a map whose key is an event to be sent and whose value is the list of corresponding streams.
  • EventProducer: Microhooks publishes events returned from callbacks on behalf of the user. However, if the user wants to explicitly publish events without concerning themselves with low-level details related to the broker and its API, the EventProducer comes to the rescue. It is designed as a wrapper class around an underlying implementation that is dynamically loaded depending on the broker used.
Listing 1. Source—Usage Example.
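A plausible reconstruction of such a usage (the Employee entity, its fields, and the stream and projection names are hypothetical; imports are omitted for brevity):

    // One source entity feeding two streams, each with its own projection,
    // so that each sink only receives the attributes it is entitled to.
    @Source(mappings = {
        "hrStream:com.example.HrProjection",
        "directoryStream:com.example.DirectoryProjection"
    })
    @Entity
    public class Employee {
        @Id @GeneratedValue
        private Long id;
        private String name;       // in both projections
        private String department; // in DirectoryProjection only
        private double salary;     // in HrProjection only
    }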
Listing 2 illustrates the use of CustomSource, Track, and ProduceEventOnUpdate annotations.
Listing 2. CustomSource—Usage Example.
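A plausible reconstruction (the Shipment entity, its fields, and the stream are hypothetical; the Event constructor shape is an assumption; imports are omitted for brevity):

    @CustomSource
    @Entity
    public class Shipment {
        @Id @GeneratedValue
        private Long id;

        @Track // only changes to this field are tracked
        private String status;

        private String address; // untracked

        // Called back by Microhooks when a record update changes "status";
        // changedFields maps each changed tracked field to its old value.
        @ProduceEventOnUpdate(streams = {"shipmentStream"})
        public Event<String> onStatusChanged(Map<String, Object> changedFields) {
            Object oldStatus = changedFields.get("status");
            return new Event<>("Status changed from " + oldStatus + " to " + status,
                               "STATUS_CHANGED");
        }
    }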

4.4.2. io.microhooks.sink

This is the package for the sink side. It offers the following artifacts:
  • Sink: This annotation marks an entity as a sink for the materialized view pattern. By doing so, Microhooks handles listening to related incoming events for record creation, update, and deletion. Sink defines a required property, stream, as a string specifying the stream from which such events must be received.
  • CustomSink: This annotation allows marking a sink-side class, in practice, a singleton service, as a component exposing custom logic to process incoming events from the source side. It defines no property.
  • ProcessEvent: This annotation goes hand in hand with CustomSink. It allows marking, on a CustomSink class, the methods that must be called back based on incoming events (see the sketch after this list). It defines two properties: the first is the input stream that shall be listened to, and the second is the label of received events, for filtering purposes. The annotated method shall define two parameters, the first representing the event ID (a key of type long) and the second representing the event itself. The method is not supposed to return anything.
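As an illustration, a custom sink might look as follows (the service name, stream, label, property names, and the Event accessor are hypothetical; the two-parameter callback shape follows the description above):

    @CustomSink
    public class ShipmentEventHandler {

        // Invoked for events labeled "STATUS_CHANGED" arriving on
        // "shipmentStream".
        @ProcessEvent(stream = "shipmentStream", label = "STATUS_CHANGED")
        public void onStatusChanged(long eventId, Event<String> event) {
            // business-specific processing of the incoming event
            System.out.println("Event " + eventId + ": " + event.getPayload());
        }
    }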

4.4.3. io.microhooks.common

This is the package that defines the artifacts common to both sides. For now, it contains a single class, Event, defining the blueprint for events. It is modeled as a generic class encapsulating the following attributes:
  • Payload: This is of a generic type, specialized by the user.
  • Label: This is a string used to characterize the event. Several events can have the same label value.
  • Timestamp: This is a long value representing the number of milliseconds since 1 January 1970, 00:00:00 GMT. It is automatically generated upon the event creation.
As for the event ID, it is not modeled as part of the event itself to avoid redundancy, as brokers provide APIs that support sending key/value pairs. Therefore, the event ID is sent and received as the key, and the event as the value.
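Conceptually, the event blueprint can be sketched as follows (the constructor, accessors, and Serializable marker are assumptions; the fields follow the description above):

    import java.io.Serializable;

    public class Event<T> implements Serializable {
        private T payload;    // generic payload, specialized by the user
        private String label; // characterizes the event; not necessarily unique
        private final long timestamp = System.currentTimeMillis(); // set on creation

        public Event(T payload, String label) {
            this.payload = payload;
            this.label = label;
        }

        public T getPayload() { return payload; }
        public String getLabel() { return label; }
        public long getTimestamp() { return timestamp; }
        // The event ID is not a field: it travels as the broker message key,
        // with the event itself as the value.
    }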
Table 1 summarizes Microhooks API.

5. Microhooks Design

5.1. High-Level Design

When designing Microhooks architecture, we have been mainly guided by our specified usability and interoperability requirements. To this end, we came up with a modular and layered design, as shown in Figure 5. At the highest layer, we have Microhooks’ core implementation exposed to microservices through the already presented Microhooks API. Not only did we succeed in making the API completely independent from containers, brokers, and ORM frameworks, but we also imposed the same constraint on ourselves regarding the core implementation. Indeed, we pushed the few integration points down to the lower layer where we had container and broker extensions. As for the ORM frameworks, we did not need any particular extension thanks to the unified and standard persistence API, JPA, implemented by all popular ORM frameworks.
Of equal importance, two interacting microservices are not obliged to use the same container or the same ORM framework to adopt Microhooks. As long as they are connected through the same broker, they can use the same or different containers and ORM frameworks, depending on their requirements and preferences. This is illustrated in Figure 6. The upper scenario illustrates three microservices using Spring, Micronaut, and Quarkus, respectively, communicating through Kafka. The lower scenario shows other microservices using the same containers, respectively, communicating through RabbitMQ.
The architecture provides a structured but static view of our framework. Here, we describe the flow and dynamics of Microhooks while leveraging the underlying container, broker, and ORM framework. Figure 7 shows the high-level steps, respectively, taken by the source and sink microservices in order to fulfill Microhooks functional requirements:
  • Step 1: On both sides, Microhooks hooks itself with the container during startup. Moreover, on the source side, Microhooks hooks itself with the ORM framework. This hooking is detailed in the design and implementation sections.
  • Step 2: Once the startup finishes, the container calls Microhooks back (as it was already hooked).
  • Step 3: This gives it the opportunity to load the pre-built context from disk to memory on both sides. Indeed, the context is constructed at build time. This will be elaborated on in the implementation subsection.
  • Steps 4, 5: At the sink side, Microhooks retrieves all input streams from the context, and subscribes to them with the broker via the corresponding extension and library.
  • Step 6: Once there is a record creation, update, or deletion of a source (or a custom source) entity, the ORM framework calls Microhooks back through JPA.
  • Step 7: This gives it the opportunity to look up, in the context, the method(s) marked with the appropriate ProduceEventOn[Create, Update, Delete] annotation, invoke them in the case of a custom source, and determine the output stream(s) to use.
  • Step 8: Microhooks uses the dynamically loaded broker extension and underlying library to send the generated event(s) through the identified streams.
  • Steps 9 and 10: The library at the source actually sends the event(s), and its peer at the sink receives them.
  • Step 11: It delivers them to Microhooks (remember, it has already subscribed with the broker in step 5).
  • Step 12: Depending on the nature of the event (CRUD or custom), Microhooks looks up, in the context, the concerned sink entity(ies) or custom sink object(s).
  • Step 13: Finally, it uses the ORM framework through JPA to perform the necessary data operation, or it invokes the appropriate method(s), marked with ProcessEvent annotation, on the identified sink object(s).

5.2. Detailed Design

Figure 8 shows the class diagram of Microhooks’ core (without the API part), as well as Spring and Kafka extensions. For other containers and brokers, corresponding extensions follow the exact same spirit and define similar and parallel classes. So, the reader can easily extrapolate from those.
The Context class is at the heart of our design. At build time, it captures all microservice metadata as specified by the developer through the Microhooks API. At runtime, the context makes the metadata accessible in O(1). We will elaborate more on the interaction between the build time and runtime in the implementation section. Going back to the metadata, it is made of the list of source and sink entities, streams mapped to each, projections to apply, custom methods to invoke, when, etc.
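Conceptually, the context can be pictured as a set of indexed maps of the following shape (the field names and types are our own illustration, not the actual internals):

    import java.lang.reflect.Method;
    import java.util.List;
    import java.util.Map;

    public final class Context {
        // source entity class -> (output stream -> projection class)
        private static Map<Class<?>, Map<String, Class<?>>> sourceMappings;

        // input stream -> sink entity classes maintaining views on it
        private static Map<String, List<Class<?>>> sinkEntitiesByStream;

        // input stream -> custom sink callback methods to invoke
        private static Map<String, List<Method>> customSinkCallbacksByStream;

        // entity class -> name of its ID field
        private static Map<Class<?>, String> idFieldNames;
    }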
The ApplicationBootstrap class is the entry point to Microhooks from the container perspective. Once the microservice starts up, the ApplicationBootstrap loads the context. It also instantiates and launches an EventConsumer if, and only if, the microservice has one or more sink entities and/or one or more sink services. The ApplicationBootstrap itself is loaded by the container. But for this to happen, it needs to be hooked with it. This is the purpose of SpringApplicationBootstrap, or any parallel class in any extension for any other container. It extends ApplicationBootstrap and exposes its functionality to the container by adhering to the latter's contract, such as implementing a specific interface or exposing a callback method decorated with a specific annotation.
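For Spring, such a hooking class might be sketched as follows (the exact container contract used and the internal method names are assumptions):

    import org.springframework.boot.context.event.ApplicationReadyEvent;
    import org.springframework.context.ApplicationListener;
    import org.springframework.stereotype.Component;

    @Component
    public class SpringApplicationBootstrap extends ApplicationBootstrap
            implements ApplicationListener<ApplicationReadyEvent> {

        @Override
        public void onApplicationEvent(ApplicationReadyEvent event) {
            loadContext();                 // load the pre-built context from disk (assumed method)
            launchEventConsumerIfNeeded(); // only if sink entities/services exist (assumed method)
        }
    }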
The SourceListener and CustomSourceListener classes represent Microhooks entry points from the ORM framework perspective. They define the logic to react to a source or custom source entity's record creation, update, or deletion and expose it as callbacks to the ORM framework. For this exposure to actually take place, these two classes are added to the entity as JPA listeners. More details on how this is conducted are given in the implementation section. And as we only take advantage of, and commit to, JPA, we do not need any extension for the ORM framework, as opposed to the container and the broker.
EventProducer and EventConsumer provide an abstraction over the broker APIs. Together with the extension for a given broker, they follow the Adapter design pattern. The EventProducer (in reality, its extension, e.g., KafkaEventProducer) is used by SourceListener and CustomSourceListener when they have events to send to the sink side. There, the EventConsumer (or rather its extension, e.g., KafkaEventConsumer) receives them, and if it needs to access a sink entity, it uses SinkRepository, or rather its extension, to support container-managed transactions.
The detailed flows relative to four different scenarios (CRUD/custom, source/sink) are given in the sequence diagrams in Figure 9, Figure 10, Figure 11 and Figure 12. They show that, on both sides, once the Microhooks ApplicationBootstrap class is loaded and instantiated by the container, it loads the pre-built context and makes it available to the other components. On the sink side, the EventConsumer reads the context to determine all the streams that it should listen to and, hence, subscribe to.
When there is a CRUD event on the source side, the ORM framework invokes the registered SourceListener, which identifies the streams and corresponding projections to use from the context, generates the corresponding events, and uses the EventProducer to send them. On the sink side, the EventConsumer receives the event, looks up the concerned entity(ies) from the context, and acts on it/them accordingly.
When there is a custom event on the source side, the ORM framework invokes the registered CustomSourceListener, which uses the context to identify the custom method and corresponding stream. Then, it invokes the method, retrieves the result (the custom event), and uses the EventProducer to send it. On the sink side, the EventConsumer receives the event from the broker, determines from the context the sink object and method to invoke, and proceeds accordingly.

6. Microhooks Implementation

6.1. Leveraging the Build Time

Our framework relies on code introspection, instrumentation, and reflection. We use introspection to identify the users' needs as expressed through Microhooks' API annotations. Based on these annotations, we build the context and instrument the code. The aim of such instrumentation is twofold: on the one hand, it integrates (hooks) Microhooks with its environment (container, broker, ORM framework); on the other hand, it supports its internal implementation. Finally, we use reflection to invoke methods and access fields that are obviously unknown at compile time.
With this background, we made the strategic decision to maximize the use of build time. We strove to perform zero code introspection, zero context building, and zero code instrumentation at runtime by completely moving them to the build time. To this end, we harnessed the excellent Byte Buddy library, which offers the possibility to perform both code introspection and instrumentation at build time (in addition to runtime) [30]. Byte Buddy implements a Gradle plugin, which takes our Builder as a transformation as shown in Figure 13.

6.2. Code Instrumentation and Context Building

Here, we describe the two main tasks of our Builder, namely code instrumentation and context building. The effects of code instrumentation are persisted in the class bytecode, while context building yields maps that we serialize and save. At runtime, we load them for fast, O(1) access. The Builder starts by introspecting the user code, looking for classes decorated with one of the four main annotations of the Microhooks API: Source, Sink, CustomSource, and CustomSink.

6.2.1. Source-Annotated Class

When such a class is found, the Builder instruments it so that, at runtime, it is registered with the ORM framework. To this end, we decorate the class with the EntityListeners JPA annotation, customized with our io.microhooks.internal.SourceListener. The latter exposes the entity's record creation, update, and deletion callbacks to the ORM framework. The Builder also instruments the class to make it implement our io.microhooks.internal.Trackable interface. This interface, along with its generated implementation, allows for tracking changes to the entity's fields during a record update operation without using reflection. This is what allows us to verify whether an update event should be sent through a stream, given its mapped projection. In this way, we only send updates through streams whose respective projections contain at least one changed field.
Listings 3 and 4 show a sample source entity, before and after instrumentation, respectively.
Listing 3. Source Entity—Before Instrumentation.
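A plausible reconstruction (the Account entity, its fields, and the stream and projection names are hypothetical; imports are omitted for brevity):

    @Source(mappings = {"accountStream:com.example.AccountProjection"})
    @Entity
    public class Account {
        @Id @GeneratedValue
        private Long id;
        private String owner;
        private double balance;
    }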
Listing 4. Source Entity—After Instrumentation.
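The same hypothetical entity after instrumentation might look as follows (the generated members are summarized in a comment, as their exact shape is internal to Microhooks):

    @Source(mappings = {"accountStream:com.example.AccountProjection"})
    @Entity
    @EntityListeners(io.microhooks.internal.SourceListener.class) // added by the Builder
    public class Account implements io.microhooks.internal.Trackable {
        @Id @GeneratedValue
        private Long id;
        private String owner;
        private double balance;

        // Generated members implementing Trackable are omitted; they record
        // the old values of projected fields so that update events can be
        // filtered per stream without using reflection.
    }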
In terms of context building, the Builder introspects the class to determine the name of the field that represents the entity’s ID, and maps it to the class name. At runtime, this replaces linear search with instant access. Then, by parsing the Source annotation, it maps each output stream to its corresponding projection for this class. The Builder also extracts the list of all fields defined in projection classes. This list represents the fields whose updates need to be kept track of at runtime.

6.2.2. CustomSource-Annotated Class

When such a class is found, the Builder instruments it so that, at runtime, it is registered with the ORM framework. To this end, and similar to a source-annotated class, we decorate it with the EntityListeners JPA annotation, customized this time with our io.microhooks.internal.CustomSourceListener. The latter exposes the entity's record creation, update, and deletion callbacks to the ORM framework. However, instead of implementing common logic for the materialized view pattern as in SourceListener, it invokes the callbacks exposed by the user through the ProduceEventOn/ProduceEventsOn[Create, Update, Delete] annotations. The Builder also instruments the class to make it implement our io.microhooks.internal.Trackable interface. As previously mentioned, this interface, along with its generated implementation, allows for tracking the changes in the entity's fields during a record update operation, without using reflection. When Microhooks invokes an update callback exposed by the user, it passes a map containing the fields that have changed along with their respective values before the change. This allows the user to determine exactly which fields have changed and how, enabling them to take a corresponding action.
In terms of context building, the Builder introspects the class to determine the name of the field that represents the entity’s ID. Then, it looks up and indexes all methods annotated with ProduceEventOn/ProduceEventsOn[Create, Update, Delete] for instant access at runtime. The Builder also extracts the list of all fields marked with the io.microhooks.source.Track annotation (cf. Microhooks API), representing the fields whose updates need to be kept track of at runtime.

6.2.3. Sink-Annotated Class

Any sink entity's record needs to reference its corresponding source entity's record. To this end, the Builder enhances the sink entity by adding a microhooksSourceId field and implementing our io.microhooks.internal.Sinkable interface, allowing such a field to be set and retrieved at runtime without reflection.
Listings 5 and 6 show a sample sink entity, before and after instrumentation, respectively.
Listing 5. Sink Entity—Before Instrumentation.
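A plausible reconstruction (the AccountView entity and its fields are hypothetical; imports are omitted for brevity):

    @Sink(stream = "accountStream")
    @Entity
    public class AccountView {
        @Id @GeneratedValue
        private Long id;
        private String owner;
        private double balance;
    }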
Listing 6. Sink Entity—After Instrumentation.
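The same hypothetical view after instrumentation might look as follows (the accessor names are assumptions):

    @Sink(stream = "accountStream")
    @Entity
    public class AccountView implements io.microhooks.internal.Sinkable {
        @Id @GeneratedValue
        private Long id;
        private String owner;
        private double balance;

        // Added by the Builder: links each view record to its source record.
        private Long microhooksSourceId;

        public Long getMicrohooksSourceId() { return microhooksSourceId; }
        public void setMicrohooksSourceId(Long sourceId) { this.microhooksSourceId = sourceId; }
    }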
As for context building, the Builder introspects the Sink annotation to extract the specified stream and map it to the sink entity. The aim is to build an “inverted” map with streams as keys and, for each stream, the list of mapped sink entity classes. At runtime, this map is used to determine the list of input streams that the EventConsumer needs to subscribe to (with the broker) and to instantly identify the sink entity(ies) associated with events received through each stream.

6.2.4. CustomSink-Annotated Class

As already mentioned, a CustomSink class would be a singleton service in practice. When instantiated by the container, or directly by the user, the object shall register itself with the context while being mapped to the stream specified by the user. In this way, when a relevant event is received, Microhooks can call the CustomSink object back (on the ProcessEvent-annotated method). For this to happen at runtime, the Builder instruments all the constructors of a CustomSink-annotated class to add such a registration logic. The Builder also extracts and maps pertinent information for context building. The corresponding flow and dynamics between build time and runtime are summarized in Figure 14.

6.3. Project Organization and How to

To keep control over Microhooks' continual growth and development, we organized it around several focused and loosely coupled projects. We adopted Gradle as our build automation tool of choice since it is, along with Apache Maven, a de facto standard in the field. As shown in Figure 15, we have one core Gradle project extended by the following:
  • Three container-specific Gradle projects: Microhooks-Spring, Microhooks-Micronaut, and Microhooks-Quarkus.
  • Three broker-specific Gradle projects: Microhooks-Kafka, Microhooks-RabbitMQ, and Microhooks-RocketMQ.
Hence, support for other containers and brokers is straightforward and has no side effects on the existing codebase.
From the user’s perspective, all they need to know about at development time is the Microhooks API; everything else remains completely transparent. At build time, they simply need to add the following:
  • Microhooks-Builder as a Gradle buildscript dependency.
  • Byte Buddy Gradle plugin, with a configured transformation pointing to Microhooks-Builder entry class.
  • Microhooks-Core as an implementation dependency.
  • The two specific extensions for their adopted container and broker, respectively, as implementation dependencies.

7. Microhooks Evaluation

Our functional requirements, as well as usability, performance, security, interoperability, and portability requirements, have been well specified and taken into account throughout the design and implementation of Microhooks. Moreover, all these requirements, except performance, which is further elaborated below, have been successfully tested through concretely implemented examples. These are available on the Microhooks GitHub repository [13] and demonstrate how Microhooks has been successfully used among microservices that needed to implement the materialized view pattern, as well as to fire and handle custom events (functional requirements). This was done through a unified (portability requirement) and declarative (usability requirement) API, while the microservices were running under different containers (Spring, Micronaut, Quarkus), each time with a different broker (Kafka, RabbitMQ) option (interoperability requirement). Porting Microhooks to other containers and brokers leverages the same core code and only requires Microhooks developers to write small extensions for those containers and brokers. Migrating Microhooks-based microservices to these containers and brokers only involves adding the respective extensions as dependencies in their build file, such as build.gradle. Finally, sources decide on the exact projections of their source entities to share with the sink microservices (security requirement).
Yet, in terms of usability, we wanted to quantify the reduction in code size and complexity achieved through Microhooks. And although we supported performance by moving all code introspection and instrumentation, as well as context building to build time, we needed to test its impact empirically. To this end, we developed an application with two microservices, namely, a Spring-based source and a Micronaut-based sink. This application supports the materialized view pattern, as well as custom events, through two implementations, namely, a raw implementation (without Microhooks) and a second one using Microhooks. They are available on the Microhooks GitHub repository [13]. In the following subsections, we compare these implementations in terms of code complexity and performance respectively.

7.1. Code Enhancement Quantification

We quantitatively evaluated the code improvement gained from Microhooks. To this end, we leveraged SonarQube, a popular open-source tool for static code analysis that helps developers identify and fix issues in their code [31]. SonarQube provides a range of features and capabilities for analyzing code quality, security, and compliance. Our code analysis focused on several key metrics: the number of lines of code, the number of classes, the cyclomatic complexity, and the cognitive complexity. In particular, the cyclomatic complexity and cognitive complexity metrics help identify code areas that might be difficult to understand or test, as well as opportunities for improving code readability and maintainability. Overall, these metrics provided us with valuable insights into the structure and complexity of the two codebases, and revealed the improvement and refactoring gained from Microhooks.
Table 2 shows the code enhancement results for the source side and sink side, respectively. The gains from using Microhooks are highly promising, with greater enhancements on the sink side. This can be explained by the relatively more tedious and boilerplate code needed to register for incoming streams, deserialize events, look up the appropriate entity records in the database (e.g., for update operations), add a constructor to custom sink services, and register their instances for later callbacks when corresponding events are received. Yet, one may wonder what these numbers would look like for other applications. We deliberately considered the simplest possible application whose focus was the implementation of the materialized view pattern and custom events. So, the obtained results can be generalized to other applications as far as the implementation of these two aspects is concerned, as opposed to the whole application code.
It is also worth mentioning that the raw implementation on both sides is container- and broker-specific. Porting it to other platforms requires rewriting several parts of the code. However, porting the Microhooks-based implementation to other platforms only requires changing the container and broker extension dependencies of Microhooks.

7.2. Qualitative Comparison with Other Frameworks

Microhooks offers a unique solution to a problem that has not been addressed by other frameworks. This makes it difficult to benchmark against existing solutions, as they do not tackle the core issue that Microhooks solves, namely, complex data replication and event management. Comparing Microhooks to frameworks like Spring Cloud, which target infrastructure-level concerns, would yield outcomes similar to those of the raw approach.
The closest framework to Microhooks is Eventuate Tram. While Eventuate provides some tools to help implement the materialized view pattern for optimizing read performance, the key difference lies in the level of abstraction that Microhooks offers. When using Eventuate Tram to implement the materialized view pattern in the microservices architecture, developers still have to handle several key aspects manually. First, they must define event types and create the necessary event publishing and subscribing mechanisms. This requires writing code to persist events to a message broker, as well as creating handlers that listen for and process those events. Additionally, developers must set up projections, or read-only views of the data, by explicitly writing the logic to transform and update the read models based on incoming events. This typically involves creating dedicated classes for each projection and configuring them to listen to specific event streams. As the number of events, read models, and microservices increases, the amount of boilerplate code grows significantly. A raw implementation of the materialized view pattern would involve even more work, as developers would need to implement event sourcing, messaging, and data synchronization mechanisms from scratch. Eventuate simplifies some of this by offering abstractions like event sourcing and support for event-based projections, but developers are still responsible for managing event handling, projection updates, and communication between services.
In contrast, Microhooks abstracts away these concerns even further with a declarative, annotation-based API that automatically handles much of the configuration, greatly reducing the need for manual wiring and boilerplate code. This cuts down the number of classes and lines of code and significantly lowers cognitive and cyclomatic complexity.
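For contrast, the sketch below shows roughly what the same source and sink code could look like with Microhooks' declarative API, based on the annotations listed in Table 1. The annotation placement and the Event constructor arguments are illustrative assumptions, not the framework's exact signatures; see the GitHub repository [13] for actual usage examples.

```java
// --- Product.java (source side): CRUD events are produced automatically ---
import io.microhooks.common.Event;
import io.microhooks.source.ProduceEventOnUpdate;
import io.microhooks.source.Source;
import io.microhooks.source.Track;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
@Source // assumed usage: stream/routing attributes omitted for brevity
public class Product {
    @Id
    private Long id;

    @Track // assumed: changes to this field are tracked and streamed to sinks
    private double price;

    @ProduceEventOnUpdate // assumed: custom event emitted when the record is updated
    public Event priceChanged() {
        return new Event("PriceChanged", price); // illustrative constructor
    }
}

// --- ProductView.java (sink side): a projection kept in sync transparently ---
import io.microhooks.sink.Projection;
import io.microhooks.sink.Sink;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
@Sink       // assumed usage
@Projection // assumed: marks this entity as a projection of a source entity
public class ProductView {
    @Id
    private Long id;
    private double price;
}
```

All the brokering, deserialization, and record lookup from the raw sketch above disappears; Microhooks generates the corresponding artifacts at build time.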

7.3. Performance Evaluation

Let us start by analyzing Microhooks' impact on performance from a user perspective, i.e., in terms of response time. We distinguish two main types of user requests: commands to source microservices and queries to sink microservices. Queries can only benefit from the materialized view pattern, since the requested service holds the queried data in its own database, already replicated from the source/command service; without such replication, it would need to retrieve the data remotely from the command side while processing the user's request. Now consider commands, taking into account how Microhooks is implemented. When a command is issued, the requested microservice processes it, persists the data to the database, and replies to the user. When data are persisted, the ORM framework invokes Microhooks through the callback methods of the exposed JPA listeners. However, these invocations are performed on a thread different from the one dedicated to processing the user's request. Moreover, streaming events to sinks through the underlying broker is done asynchronously. Commands are therefore not affected by Microhooks' overhead, thanks to multi-threading; we confirmed this empirically.
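To illustrate why commands are unaffected, here is a minimal conceptual sketch (not Microhooks' actual code) of how a JPA entity listener can hand event streaming off to a separate thread; the publish body and the listener wiring are assumptions.

```java
// Conceptual sketch: the ORM invokes these callbacks after persisting data,
// and the actual streaming work is offloaded so the command thread returns
// to the user without waiting on the broker.
import jakarta.persistence.PostPersist;
import jakarta.persistence.PostUpdate;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SourceEntityListener {

    private static final ExecutorService streamer = Executors.newSingleThreadExecutor();

    @PostPersist
    public void onCreate(Object entity) {
        streamer.submit(() -> publish("created", entity)); // non-blocking for the caller
    }

    @PostUpdate
    public void onUpdate(Object entity) {
        streamer.submit(() -> publish("updated", entity));
    }

    private void publish(String type, Object entity) {
        // assumed: serialize the entity and stream the event to the broker
    }
}
```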
Yet, we needed to know how, in those separate threads, event handling on each side performs with Microhooks versus a raw implementation. In theory, this is where instantaneous access to pre-built and indexed context data helps, but we had to evaluate it empirically and quantitatively. To that end, we used the open-source JMeter performance metering tool [32]. The results are summarized in Table 3 and Table 4, which show the average handling times of two types of events: a record creation event and a custom event on a record update, respectively. For each event type, we measured performance on both the source side and the sink side. On the source side, we varied the number of concurrent users (from 25 up to 200) and the number of requests per user (from 20,000 down to 2500) to put the tested applications under different loads and observe how they scaled. On the sink side, since events are consumed from the broker serially, no matter how concurrently they are produced on the source side, we considered a large number of events (500,000) and measured their average processing time. In every test, whatever the type or side of the event, we repeated the measurement several times and report intervals rather than single values for more faithful results. These results show that the Microhooks overhead is almost null. Hence, our expectations regarding Microhooks' performance were confirmed and even exceeded.

8. Conclusions and Future Work

In this article, we strove to address the real-life need to streamline the development of microservices, through the transparent management of data distribution and replication, as well as the abstracted production, consumption, and processing of business-driven events. Our solution consisted of a novel framework, called Microhooks, based on well-defined functional requirements, as well as usability, performance, security, interoperability, and portability requirements. We fulfilled all such requirements thanks to a smart design and an optimized implementation, leveraging patterns, best practices, and best-of-breed software libraries. We exposed a unified, container-agnostic, broker-neutral, and ORM framework-independent API to microservice developers. Backed by specific extensions and the adapter design pattern, even the core implementation of Microhooks is platform-independent (e.g., container, broker, ORM framework). In terms of implementation, we made the strategic choice to move all code introspection and instrumentation, as well as context building, to the build time. In this way, at runtime, Microhooks enjoys O(1) access to prebuilt and indexed context data, in order to instantaneously map streams and events to entities, objects, and methods.
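As a rough illustration of this build-time strategy, the sketch below shows the kind of prebuilt, indexed context described here: a map from entity classes to their stream and event-producer metadata, so runtime lookups cost O(1) with no reflective scanning. All names are hypothetical, not Microhooks' actual classes.

```java
// Hypothetical shape of a build-time context index.
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// What the build step is described as precomputing for each source entity:
// the streams it feeds and the methods that produce its events.
record SourceContext(List<String> streams, Map<String, Method> eventProducers) {}

final class MicrohooksContext {

    // In the described design, this map is built and indexed at build time and
    // merely loaded at startup; register() stands in for that loading step.
    private static final Map<Class<?>, SourceContext> SOURCES = new HashMap<>();

    static void register(Class<?> entity, SourceContext ctx) {
        SOURCES.put(entity, ctx);
    }

    static SourceContext lookup(Class<?> entity) {
        return SOURCES.get(entity); // O(1): no runtime introspection needed
    }
}
```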
The effectiveness and efficiency of Microhooks have been thoroughly evaluated. The results are highly promising and confirm or exceed our expectations. In particular, Microhooks drastically reduces code size and complexity without any measurable penalty on performance. Therefore, we consider Microhooks to be production-ready and highly recommend it to microservice developers.
As a future direction, we plan to capitalize on the spirit and design of Microhooks to port it from Java to other popular languages, such as Python, C#/.NET, and JavaScript/TypeScript. We would also like to extend the functional requirements of Microhooks while preserving the same commitment toward its non-functional requirements. These functional requirements include the transparent management of distributed transactions, work on which has already started, based on the two-phase commit protocol. They also include support for fault analysis and debugging. Finally, we are in the process of incubating Microhooks as an Apache Project. To this end, a podling proposal has been prepared and is ready for submission to the Apache Incubator.

Author Contributions

Conceptualization, O.I.; methodology, O.I. and M.E.K.E.H.; software, O.I.; validation, O.I.; investigation, M.E.K.E.H. and A.Z.; resources, M.E.K.E.H. and A.Z.; writing—original draft preparation, M.E.K.E.H. and A.Z.; writing—review and editing, O.I.; supervision, O.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The source code of Microhooks, as well as usage examples, can be found in this GitHub repository: https://github.com/oiraqi/microhooks (accessed on 1 February 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
API    application programming interface
CAP    consistency, availability, and partition tolerance
CRUD   create, retrieve, update, delete
DDD    domain-driven design
JPA    Java/Jakarta Persistence API
ORM    object-relational mapping

References

1. Chen, L. Microservices: Architecting for Continuous Delivery and DevOps. In Proceedings of the ICSA, IEEE Computer Society, Seattle, WA, USA, 30 April–4 May 2018; pp. 39–46. Available online: http://dblp.uni-trier.de/db/conf/icsa/icsa2018.html (accessed on 1 February 2025).
2. Waseem, M.; Liang, P.; Shahin, M. A Systematic Mapping Study on Microservices Architecture in DevOps. J. Syst. Softw. 2020, 170, 110798.
3. Ponce, F.; Márquez, G.; Astudillo, H. Migrating from monolithic architecture to microservices: A Rapid Review. In Proceedings of the 2019 38th International Conference of the Chilean Computer Science Society (SCCC), Concepcion, Chile, 4–9 November 2019; pp. 1–7.
4. Lauretis, L.D. From Monolithic Architecture to Microservices Architecture. In Proceedings of the ISSRE Workshops; Wolter, K., Schieferdecker, I., Gallina, B., Cukier, M., Natella, R., Ivaki, N., Laranjeiro, N., Eds.; IEEE: Piscataway, NJ, USA, 2019; pp. 93–96. Available online: http://dblp.uni-trier.de/db/conf/issre/issre2019w.html (accessed on 1 February 2025).
5. Abgaz, Y.; McCarren, A.; Elger, P.; Solan, D.; Lapuz, N.; Bivol, M.; Jackson, G.; Yilmaz, M.; Buckley, J.; Clarke, P. Decomposition of Monolith Applications Into Microservices Architectures: A Systematic Review. IEEE Trans. Softw. Eng. 2023, 49, 4213–4242.
6. Al-Debagy, O.; Martinek, P. A Comparative Review of Microservices and Monolithic Architectures. In Proceedings of the 2018 IEEE 18th International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 21–22 November 2018; pp. 000149–000154.
7. Ramírez, F.; Mera-Gómez, C.; Bahsoon, R.; Zhang, Y. An Empirical Study on Microservice Software Development. In Proceedings of the 2021 IEEE/ACM Joint 9th International Workshop on Software Engineering for Systems-of-Systems and 15th Workshop on Distributed Software Development, Software Ecosystems and Systems-of-Systems (SESoS/WDES), Madrid, Spain, 3 June 2021; pp. 16–23.
8. Li, S.; Zhang, H.; Jia, Z.; Zhong, C.; Zhang, C.; Shan, Z.; Shen, J.; Babar, M.A. Understanding and addressing quality attributes of microservices architecture: A Systematic literature review. Inf. Softw. Technol. 2021, 131, 106449.
9. Zhou, X.; Peng, X.; Xie, T.; Sun, J.; Ji, C.; Li, W.; Ding, D. Fault Analysis and Debugging of Microservice Systems: Industrial Survey, Benchmark System, and Empirical Study. IEEE Trans. Softw. Eng. 2021, 47, 243–260.
10. Spring Cloud. Available online: https://spring.io/projects/spring-cloud (accessed on 1 February 2025).
11. Why You Can’t Talk About Microservices Without Mentioning Netflix. 2015. Available online: https://smartbear.com/blog/develop/why-you-cant-talkabout-microservices-without-ment/ (accessed on 1 February 2025).
12. Netflix Open Source Software Center. Available online: https://netflix.github.io/ (accessed on 1 February 2025).
13. Microhooks: Code and Examples. Available online: https://github.com/oiraqi/microhooks (accessed on 1 February 2025).
14. Vernon, V. Domain-Driven Design Distilled; Addison-Wesley: Boston, MA, USA, 2016.
15. Evans, E. Domain-Driven Design Reference; Dog Ear Publishing: Indianapolis, IN, USA, 2014. Available online: http://domainlanguage.com/ddd/reference/ (accessed on 1 February 2025).
16. Database-per-Service Pattern. Available online: https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-data-persistence/database-per-service.html (accessed on 1 February 2025).
17. Fekete, A. CAP Theorem. In Encyclopedia of Database Systems, 2nd ed.; Liu, L., Özsu, M.T., Eds.; Springer: Berlin/Heidelberg, Germany, 2018. Available online: http://dblp.uni-trier.de/db/reference/db/c2.html (accessed on 1 February 2025).
18. Challenges and Solutions for Distributed Data Management. Available online: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/distributed-data-management (accessed on 1 February 2025).
19. Maddodi, G.; Jansen, S. Responsive Software Architecture Patterns for Workload Variations: A Case-study in a CQRS-based Enterprise Application. In Proceedings of the BENEVOL; CEUR Workshop Proceedings; Demeyer, S., Parsai, A., Laghari, G., van Bladel, B., Eds.; CEUR-WS.org: Aachen, Germany, 2017; Volume 2047, p. 30. Available online: http://dblp.uni-trier.de/db/conf/benevol/benevol2017.html (accessed on 1 February 2025).
20. Laigner, R.; Zhou, Y.; Salles, M.A.V. A Distributed Database System for Event-Based Microservices. In Proceedings of the 15th ACM International Conference on Distributed and Event-Based Systems, DEBS ’21, New York, NY, USA, 11–15 July 2021; pp. 25–30.
21. Soldani, J.; Tamburri, D.A.; van den Heuvel, W.J. The pains and gains of microservices: A Systematic grey literature review. J. Syst. Softw. 2018, 146, 215–232. Available online: http://dblp.uni-trier.de/db/journals/jss/jss146.html (accessed on 1 February 2025).
22. Laigner, R.; Zhou, Y.; Salles, M.A.V.; Liu, Y.; Kalinowski, M. Data Management in Microservices: State of the Practice, Challenges, and Research Directions. Proc. VLDB Endow. 2021, 14, 3348–3361.
23. Eventuate Tram. Available online: https://eventuate.io/docs/manual/eventuate-tram/latest/about-eventuate-tram.html (accessed on 1 February 2025).
24. Jakarta Persistence. Available online: https://jakarta.ee/specifications/persistence/3.0/jakarta-persistence-spec-3.0.pdf (accessed on 1 February 2025).
25. Spring Framework. Available online: https://spring.io (accessed on 1 February 2025).
26. Micronaut Framework. Available online: https://micronaut.io (accessed on 1 February 2025).
27. Quarkus Framework. Available online: https://quarkus.io (accessed on 1 February 2025).
28. Hibernate ORM Framework. Available online: https://hibernate.org (accessed on 1 February 2025).
29. EclipseLink ORM Framework. Available online: https://projects.eclipse.org/projects/ee4j.eclipselink (accessed on 1 February 2025).
30. Byte Buddy: Code Generation and Manipulation Library for Java. Available online: https://bytebuddy.net (accessed on 1 February 2025).
31. SonarQube: Code Quality and Security Analysis Tool. Available online: https://www.sonarsource.com/products/sonarqube/ (accessed on 1 February 2025).
32. JMeter: Functional Behavior and Performance Metering Tool. Available online: https://jmeter.apache.org/ (accessed on 1 February 2025).
Figure 1. Illustrative example—one source, two sinks.
Figure 2. First use case—code excerpt.
Figure 3. Second use case—code excerpt.
Figure 4. Third use case—code excerpt.
Figure 5. Microhooks modular and layered architecture.
Figure 6. Containers, brokers, and ORM framework integration—sample deployment scenarios.
Figure 7. Microhooks flow and dynamics.
Figure 8. Microhooks class diagram.
Figure 9. CRUD event sequence diagram—source side.
Figure 10. CRUD event sequence diagram—sink side.
Figure 11. Custom event sequence diagram—source side.
Figure 12. Custom event sequence diagram—sink side.
Figure 13. Microhooks Builder.
Figure 14. CustomSink Journey from Build Time to Runtime.
Figure 15. Microhooks hierarchy.
Table 1. Microhooks API.

| Applicability | Source Side | Sink Side |
| --- | --- | --- |
| Package | io.microhooks.source | io.microhooks.sink |
| Fully-Automated CRUD Events | | |
| Class Annotations | Source | Sink, Projection |
| User-Defined Custom Events | | |
| Class Annotations | CustomSource | CustomSink |
| Field Annotations | Track | |
| Method Annotations | ProduceEventOnCreate, ProduceEventsOnCreate, ProduceEventOnUpdate, ProduceEventsOnUpdate, ProduceEventOnDelete, ProduceEventsOnDelete | ProcessEvent |
| Class | EventProducer | |
| Applicability | Common | |
| Package | io.microhooks.common | |
| Class | Event | |
Table 2. Code enhancement analysis results.

Source Side
| Metric | Raw | Using Microhooks | Enhancement |
| --- | --- | --- | --- |
| Number of Lines | 230 | 95 | 58% |
| Number of Classes | 12 | 5 | 58% |
| Cyclomatic Complexity | 22 | 9 | 59% |
| Cognitive Complexity | 11 | 3 | 73% |

Sink Side
| Metric | Raw | Using Microhooks | Enhancement |
| --- | --- | --- | --- |
| Number of Lines | 388 | 86 | 78% |
| Number of Classes | 13 | 4 | 69% |
| Cyclomatic Complexity | 54 | 6 | 89% |
| Cognitive Complexity | 33 | 1 | 97% |
Table 3. Record creation event. Average handling times are reported as [min–max] intervals in µs.

Source Side
| Users | Reqs/User | Raw (µs) | Using Microhooks (µs) | Overhead |
| --- | --- | --- | --- | --- |
| 25 | 20,000 | [20–23] | [20–23] | <0.5% |
| 50 | 10,000 | [21–24] | [21–24] | <0.5% |
| 100 | 5000 | [21–24] | [21–24] | <0.5% |
| 200 | 2500 | [22–25] | [22–25] | <0.5% |

Sink Side
| Processed Events | Raw (µs) | Using Microhooks (µs) | Overhead |
| --- | --- | --- | --- |
| 500,000 | [235–385] | [235–385] | <0.5% |
Table 4. Custom event on a record update. Average handling times are reported as [min–max] intervals in µs.

Source Side
| Users | Reqs/User | Raw (µs) | Using Microhooks (µs) | Overhead |
| --- | --- | --- | --- | --- |
| 25 | 20,000 | [19–23] | [19–23] | <0.5% |
| 50 | 10,000 | [21–27] | [21–27] | <0.5% |
| 100 | 5000 | [21–27] | [21–27] | <0.5% |
| 200 | 2500 | [21–27] | [21–27] | <0.5% |

Sink Side
| Processed Events | Raw (µs) | Using Microhooks (µs) | Overhead |
| --- | --- | --- | --- |
| 500,000 | [14–19] | [14–19] | <0.5% |
