Article

AMANDA: A Middleware for Automatic Migration between Different Database Paradigms

by Jordan S. Queiroz *, Thiago A. Falcão, Phillip M. Furtado, Fabrício L. Soares, Tafarel Brayan F. Souza, Pedro Vitor V. P. Cleis, Flavia S. Santos and Felipe T. Giuntini *

Sidia R&D Institute, Manaus 69055-035, Brazil

* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(12), 6106; https://doi.org/10.3390/app12126106
Submission received: 20 April 2022 / Revised: 3 June 2022 / Accepted: 7 June 2022 / Published: 16 June 2022

Abstract: In a world rich in interconnected and complex data, the non-relational database paradigm can better handle large volumes of data at high speed with a scale-out architecture, two essential requirements for large industries and world-class applications. This article presents AMANDA, a flexible middleware for automatic migration between relational and non-relational databases based on a user-defined schema, with support for multiple source and target databases. We evaluate the performance of AMANDA by assessing migration speed, query execution, query performance, and migration correctness, from two Relational Database Management Systems (RDBMSs), i.e., Postgres and MySQL, to a non-relational (NoSQL) database, i.e., Dgraph. The results show that AMANDA successfully migrates data 26 times faster than previous approaches when considering the Northwind database. Regarding the IMDB database, it took 7 days to migrate 5.5 GB of data.

1. Introduction

The continuous change in the datafication process of weakly structured heterogeneous data makes it hard to carry out any analysis using manual and conventional methods [1,2]. Concurrently, new large datafication processes are occurring in different contexts, e.g., healthcare [3,4], software development and content management systems [5,6,7], Internet of Things (IoT) and smart cities [8,9,10], e-business [11], and social media [12,13,14,15]. Furthermore, new storage methods are constantly revisited and reassessed to support those processes at higher performance standards.
Such changes occur as part of a new and challenging research field: big data. In terms of characteristics, big data refers to a category of data that is often unstructured, assorted, growing in volume, and subject to ever-tighter time constraints, thus demanding every bit of computational power and processing speed available [3,16]. For example, the authors in [17] point out that Facebook users upload more than 290 million photos and status updates per day, and the authors in [3] report that healthcare data are expected to reach 25,000 petabytes. Handling such a data volume imposes new storage scalability challenges for many organizations that have historically used relational databases to store, analyze, and process data [3]. To overcome this challenge, new data storage technologies are emerging to better scale and speed up data processing [18].
Database technologies are commonly classified into relational and non-relational (NoSQL) paradigms. The former requires a data schema, which provides interfaces for high-level, complex queries through formulas representing application-specific integrity constraints [19]. The latter is schema-agnostic [20], thus requiring no predefined data schema, alternatively providing high-performance queries on high volumes of unstructured data [21]. Examples of Relational Database Management Systems (RDBMS) include PostgreSQL [22], MySQL [23], and Oracle [24]. Common to them is the support for Structured Query Language (SQL) [25] and the presence of tables composed of rows and columns as a way of structuring data. In an RDBMS, each row is identified by a unique value called the primary key. Columns of a table represent attributes common to each row, explicitly defined through a data type that represents the intended use of the attribute data. Such constraints and their relationships are described in a database schema that must be defined prior to any database usage [5,18].
NoSQL databases are different from RDBMS in their underlying implementation. They make use of something other than tables to store data, and can be classified based on their data model [5]: key-value (e.g., memcached [26]), document (e.g., MongoDB [27]), column-oriented (e.g., Cassandra [28]), and graph (e.g., Neo4J [29]). When working with the current trending scenarios that often contain substantial arrangements of unstructured and uncoupled data, NoSQL databases can overcome some RDBMS limitations, namely: poor distributed data processing (scalability) and high latency and complexity (performance) [30]. In effect, in comparison to RDBMS, NoSQL databases are noteworthy for their operational scalability and performance when working with flexibly structured data [31,32,33].
Given that RDBMSs and NoSQL databases present very different qualities and usage benefits, there is often a need to migrate from one type to the other, something that can be achieved through data migration tools [5]. Graph databases, a type of NoSQL database, handle relationships more efficiently than RDBMSs [5]. One of the reasons is that an RDBMS computes relationships at query time through join operations, whereas graph databases store data relationships as data entities [18].
Most of the previous studies focus on database metadata evaluation [34,35,36], schema crawling [18], and Entity Relational (ER) diagram conversion [37], and thus do not allow users to select a database subset or specific table attributes to migrate. Other studies [21,38] depend tightly on third-party tools, e.g., Kafka and Apache Phoenix. Furthermore, these related works cannot be extended to support more than one source and target database. Our method, in contrast, relies on user input, e.g., the tables and attributes to be migrated. With such input, the migration tool runs SELECT statements to obtain the data from the RDBMS and then migrates the obtained data to Dgraph. Moreover, new modules can be developed and integrated into the proposed migration tool to support migrations between other databases. Since plain SQL statements are run, our migration tool does not rely on third-party tools as those proposed in [21,38] do.
To the best of our knowledge, no relational to non-relational database migration middleware exists that is flexible and based on direct queries. In addition, no research studies have been found on solutions working with more than a single source and target database. Furthermore, no tool that migrates data to Dgraph could be found, except for Dgraph’s built-in migration tool, which migrates only from MySQL databases.
Hence, a migration middleware with support for additional RDBMSs, such as MySQL and Postgres, may allow for a smoother transition out of a relational database. In this work, the examples always refer to Postgres as the Relational Database Management System (RDBMS) and to Dgraph as the graph database, a type of NoSQL database.
Since data are a critical asset for companies that need to stay competitive in their respective markets, and since new technologies are emerging to handle such data, we defined the following research question:
  • How can automatic migration between different database paradigms be provided, ensuring data reliability with good computational performance?
To answer the research question, this study presents AMANDA, a Middleware for Automatic MigrAtioN of different DAtabases. AMANDA is capable of migrating between more than one source and target database. To evaluate the middleware, we conducted an experimental study of a migration from an RDBMS to a NoSQL graph-oriented database.
We highlight the main contributions:
  • A flexible middleware that supports migrations between different database paradigms based on user-specified definitions (i.e., tables and attributes), without requiring the migration of the entire database, as previous works do. In addition, the middleware is independent of and extensible to other databases.
  • Users can choose what to migrate, since the migration is based on the tables and attributes they select. With the values informed by the user, the tool runs SQL queries to obtain the data.
  • Performance evaluation experiments with real, public databases show 100% data migration success at a speed 26 times faster than previous tools and approaches.
This article is structured as follows: Section 2 presents and discusses existing related research work; Section 3 presents the proposed migration method; Section 4 presents the experiments and the method evaluation; Section 5 discusses the takeaways; and, finally, Section 6 concludes this work.

2. Related Works

In this section, we enumerate and describe the various proposed solutions related to the task of migrating from relational to non-relational databases.
The authors in [34] proposed a new algorithm for relational to graph database conversion, more specifically to Neo4j. A comparison was carried out between their solution, R2G [39], and RDB2Graph [40], covering the number of nodes, edges, and properties, and the database size. Their solution presented a smaller number of nodes and edges, the same number of properties as RDB2Graph, fairly improved querying times in comparison to RDB2Graph, and slightly improved querying times in comparison to R2G.
Regarding the use of graph databases in big data and the conversion from relational to graph databases, the authors in [35] developed a tool to migrate databases from MySQL to Neo4j. The tool makes use of joins, represented by foreign key relationships. Each cell in a table becomes an edge, which then connects to its column name and to one or more of the other fields of its row. The graph model is then automatically generated with the aid of the table structure, reading the table data and finding distinct values. The conversion is performed by creating an edge for each row using the key value in the table, connecting it to the other non-key values, and labeling the edge with the attribute names. If the table has a compound key, a hyperedge is created to connect the key domain values to the other non-key domain values.
Another migration tool, proposed in [18], migrates a relational database (MySQL) to a graph database (Neo4J). The tool works in two stages: (I) it uses SchemaCrawler to extract the table metadata, and (II) it converts the extracted data to vertices and edges and imports them into Neo4J through its API. Moreover, the authors survey best practices for converting data from relational to graph databases. This work uses a non-publicly available dataset in its experiments.
The authors in [42] demonstrated the usage of functional dependencies for graph database conversion, taking algorithms such as those presented in [39,41] as baselines. The authors have shown, through survey studies, that those algorithms have flaws, and have therefore proposed fixes for them. In comparison to the aforementioned algorithms, their approach showed improvements in the execution time required for a database conversion, as well as finer query answer correctness with respect to the answers produced by relational databases. The datasets used in the paper are: (i) the Order store database, a relational database [41]; (ii) the Northwind dataset; (iii) a subset of the Wikipedia-2008 database; and (iv) a subset of the IMDb database. The initial relational database was SQL Server, and the resulting graph database was Neo4j. The queries were based on the Wikipedia-2008 dataset.
Some algorithms have been proposed to automatically migrate relational to graph databases. The authors in [37] use an ER diagram as a key part of the migration process, providing information that is passed on to the edges and nodes of the graph. Upon evaluating the algorithm, the authors state that, for an ER diagram with n entities, m relationships, and p parent–child relationships, with p < n, the time complexity is O(nm + np), equivalent to O(n²). The method alleviates the influence of model disparities between relational and graph databases, preserving data integrity and constraints. In the experiments, a SQL Server database is migrated to a Neo4j database, using the Northwind and IMDb datasets. The authors demonstrated that queries involving multiple tables are more efficient in Neo4j than in SQL Server.
The authors in [36] propose a migration algorithm from a relational database (MySQL) to a graph database (Neo4j). The algorithm is described in detail, showing how tables and join operations in the relational database are mapped into vertices and edges in the graph database. The migrated dataset is Northwind. A query speed comparison between Neo4j and MySQL indicated that join-intensive queries perform better on Neo4j, whereas queries without joins perform better on MySQL. The results have shown that polyglot persistence, a practice that combines more than one type of database and carefully chooses the most suitable one for each task, ensures that the drawbacks of one database are offset by the benefits of another, thus improving performance efficiency.
Rules have also been proposed for relational to non-relational database migration. The authors in [11] propose traditional rules to be used when data are converted from an Oracle database to a MongoDB one. The rules cover the following cases: 1×N and N×N associations, composition, aggregation, and inheritance. The authors state that the rules support time-varying data and time series. Their work neither makes a prototype available nor reports any experiments.
The authors in [43] redesign a database management system to make it a hybrid database system, that is, one where both the SQL and NoSQL paradigms operate seamlessly. To achieve this, the authors use MySQL as the relational database and the Resource Description Framework (RDF) to store unstructured data, properly enabling graph queries. To make the SQL and NoSQL parts work together, the authors use the concept of ontology, following the OWL (Web Ontology Language) standard and tools that support both SQL and RDF.
The authors in [38] propose a method to migrate data from MySQL to HBase (a column-oriented, non-relational database) by using Apache Phoenix, a relational database engine working as an SQL layer on top of HBase. Phoenix cannot handle complex queries, and, for this reason, the authors apply several techniques during migration, e.g., (1) query translation, where nested queries are rewritten using temporary tables; (2) data denormalization, to avoid join operations among tables and thus improve read performance; and (3) covered indexes, to reduce table access time. The authors also use Support Vector Machines (SVMs) to classify whether a given query performs better on the relational database or on the NoSQL one. The dataset used in the experiment is TPC-H, a benchmark published by the Transaction Processing Performance Council.
A migration tool that works with two approaches in parallel (migration of static data and of real-time data) was proposed in [21]. Its supported databases are MySQL (as the source) and MongoDB (as the target). For static data, a snapshot of each whole table is built as JSON, and a migration thread is run for each table snapshot. Real-time migration, on the other hand, is handled by a non-interactive background process (daemon) that listens to the source database for changes and sends them to the real-time migration module. The experiments with this migration tool were conducted with a non-publicly available dataset.
Table 1 summarizes the related works, which rely on database metadata, formal rules to transform an ER diagram into a graph model, query translation, database snapshots, and streaming techniques. Conversely, our work queries the source database directly, according to the definitions in schema.json, a configuration file. Furthermore, the middleware is flexible in order to support additional source and target databases. More details are provided in Section 3.

3. The Middleware

In this section, we introduce AMANDA, a Middleware for Automatic MigrAtioN of different DAtabases. AMANDA’s architecture is divided into three modules: core, SQL Connection Implementation, and Graph Implementation. The architecture is presented in Figure 1.
The first module, called core, comprises the components that read the schema.json file and provide the connection to relational databases. We describe these components in detail below:
  • Schema Provider: Obtains the tables and their properties from the file schema.json, which are then used during the querying process in the Relational Database Management System (RDBMS) to extract the data to be migrated. More details about schema.json are provided later in Listings 1 and 2.
  • SQL Connection: Provides the required methods used for querying and connecting to a source RDBMS.
  • GraphWriter: An abstraction of the Schema Provider and SQL Connection used to provide support for the target Graph Database, where the data will be migrated to.
Modules that connect to a specific RDBMS are part of the SQL Connection Implementation module. Here, users are expected to implement their own connection class (module), inherited from SQL Connection, with the necessary methods. The following methods are mandatory, although their implementations can be adapted as needed: connect, query, and count_table_rows. In this work, Postgres is an example of such an RDBMS and is used in the experiment as a source database.
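For concreteness, a minimal sketch of such a connection class is shown below, assuming a Python implementation with the psycopg2 driver. Only the three mandatory method names (connect, query, and count_table_rows) and the SQL Connection abstraction come from the architecture described above; the class name, constructor, and driver choice are illustrative assumptions, not AMANDA's actual code.

```python
import psycopg2  # assumed driver; any DB-API-compatible driver would work


class SQLConnection:
    """Base abstraction: the methods every source-RDBMS module must provide."""

    def connect(self):
        raise NotImplementedError

    def query(self, sql):
        raise NotImplementedError

    def count_table_rows(self, table_name):
        raise NotImplementedError


class PostgresConnection(SQLConnection):
    """Hypothetical Postgres implementation of the mandatory methods."""

    def __init__(self, dsn):
        self.dsn = dsn
        self.conn = None

    def connect(self):
        # Open the connection to the source database.
        self.conn = psycopg2.connect(self.dsn)

    def query(self, sql):
        # Run a SELECT built from the schema.json definitions and return all rows.
        with self.conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()

    def count_table_rows(self, table_name):
        # Used to verify migration correctness (row count per table).
        with self.conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table_name}")
            return cur.fetchone()[0]
```

A MySQL module would follow the same contract with a different driver.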
Modules responsible for data migration from a source database to a target database are implemented as part of the Graph Implementation module. These modules make use of the schema.json file and of the source database connection, provided by SchemaProvider and SQLConnection, respectively. Classes such as DgraphRDFWriter perform queries on the source database, obtaining data according to the schema.json definitions. Each target graph database (e.g., Dgraph) is expected to have its own class, given that each database has its own way of writing and reading data. In this work, Dgraph is employed as the target database.
The framework output consists of files representing the data to be migrated. The built-in tools of the graph databases use such files as input, e.g., Dgraph receives RDF files as the input used to populate the database. In summary, AMANDA reads the tables and attributes defined in a vertex array in the file schema.json, as shown in Listing 1. After that, through the defined tables and attributes, the data are fetched from the source database (Figure 2A) through SQL queries, and the result is converted into nodes in a graph (Figure 2B). After the nodes are defined, AMANDA reads the edges section in schema.json, as shown in Listing 2, and uses the primary and foreign keys to define the edges (Figure 2C). The node and edge definitions are saved in RDF format, as shown in Listing 3. The SQL queries used to fetch data from the source database are built as string templates and shown in Listing 4:
Listing 1. Example of Vertex.
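The published listing is rendered as an image; in its place, the following is a plausible sketch of the vertices section, consistent with the property names described in the text below (source.tableName, source.orderField, target.vertexName, target.vertexID, and fields). The table, vertex, and attribute values are illustrative, not the original listing's content.

```json
{
  "vertices": [
    {
      "source": { "tableName": "products", "orderField": "product_id" },
      "target": { "vertexName": "product", "vertexID": ["product_id"] },
      "fields": ["product_name", "unit_price", "units_in_stock"]
    }
  ]
}
```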
Listing 2. Example of Edge.
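Again in place of the original image, a plausible sketch of the edges section, using the property names described in the text below; the concrete table, key, and vertex values are illustrative.

```json
{
  "edges": [
    {
      "source": {
        "tableName": "products",
        "primaryKey": "product_id",
        "foreignKey": "supplier_id"
      },
      "target": {
        "leftVertexName": "product",
        "rightVertexName": "supplier",
        "edgeName": "supplied_by"
      }
    }
  ]
}
```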
Listing 3. Example of Data migrated to a graph with RDF Syntax.
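The original listing is also an image; the following sketch shows what the generated output could look like in the RDF (N-Quad) style accepted by Dgraph's loaders, continuing the hypothetical schema above. Predicate names and values are illustrative.

```
_:product_1 <product_name> "Chai" .
_:product_1 <unit_price> "18.0" .
_:product_1 <supplied_by> _:supplier_1 .
_:supplier_1 <company_name> "Exotic Liquids" .
```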
Listing 4. SQL Queries.
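In place of the original image, a plausible reconstruction of the two query templates from the template variables named in the next paragraph (cols, table_name, table, src_fk, join_table, and dst_pk); the exact SQL in the published listing may differ.

```sql
-- Vertex template: fetch the chosen attributes of every row in a table.
SELECT {cols} FROM {table_name};

-- Edge template: pair each row's key with the key it references,
-- joining through the table that holds the foreign key.
SELECT {table}.{src_fk}, {join_table}.{dst_pk}
FROM {table}
JOIN {join_table} ON {table}.{src_fk} = {join_table}.{dst_pk};
```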
In Listing 4, the variables cols, table_name, table, src_fk, join_table, and dst_pk are obtained from the schema.json file, whose syntax is explained in the next paragraphs. With these queries, tables can be converted into graph nodes, and relationships into graph edges, as shown in Figure 2.
Through the schema.json file, the middleware knows which data must be migrated. The file follows a syntax that describes the data properties and relations. Listing 1 demonstrates an example of the vertex syntax.
In Listing 1, vertices is an array in which N vertices can be defined. A vertex definition is composed of three properties, namely: (I) source, which sets tableName to the table’s name in the source database (e.g., Postgres) and orderField to the result order; (II) target, which sets vertexName to the vertex’s name in the target database (e.g., Dgraph); given that Dgraph makes use of an ID field (analogous to a primary key), a value must be provided as vertexID, which can be composed of N attributes; and (III) fields, a property that lists all of the attributes to be migrated from the source database (Postgres) to the target database (Dgraph).
Once the vertices are created, it makes sense to connect them. In a graph, a connection between vertices is called an edge. An edge represents a relationship between vertices in the same way that a foreign key represents a relationship between tables. In the schema.json file, edges are defined in the syntax shown in Listing 2.
In Listing 2, edges is an array in which N edges can be defined. To define an edge, two properties must be set: (I) source, the data of the source database needed to build the edge, where primaryKey is the edge’s starting point, foreignKey is the edge’s ending point, and both belong to the table given in tableName; and (II) target, the edge configuration in the target database, which defines where the edge begins (leftVertexName), where it ends (rightVertexName), and the edge’s name (edgeName).
The examples presented in Listings 1 and 2 illustrate a migration from Postgres to Dgraph, following the UML diagram in Figure 3.
The architecture presented in Figure 1 can be extended to support migration between other source RDBMSs and target graph databases, e.g., a migration from MySQL to Neo4J. At a high level, the steps are the following:
  • Create a schema.json file describing data that are expected to be migrated;
  • Create a class containing the necessary methods for connecting and querying the source database;
  • Create a class to handle incoming data from the source database and then generate a graph to the target database.
Regarding the steps above, it is important to note that AMANDA currently migrates data only to graph databases, such as Dgraph. Support for another graph database, as in the example above, can be added by implementing a Writer class in the Graph Implementation module, analogous to DgraphRDFWriter. Similarly, to add support for another NoSQL database, such as MongoDB, a new Writer class would have to be implemented in both the core and Graph Implementation modules.
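As a sketch of what such an extension could look like, the fragment below follows the Python architecture assumed earlier. The GraphWriter abstraction and the DgraphRDFWriter example come from the text; the Neo4J writer shown here, its method names, and the schema-provider accessors (vertices(), table_name, vertex_name, fields) are hypothetical.

```python
class GraphWriter:
    """Abstraction over Schema Provider and SQL Connection (core module)."""

    def __init__(self, schema_provider, sql_connection):
        self.schema = schema_provider
        self.db = sql_connection

    def write(self, output_path):
        raise NotImplementedError


class Neo4jCypherWriter(GraphWriter):
    """Hypothetical writer emitting Cypher CREATE statements for Neo4J."""

    def write(self, output_path):
        with open(output_path, "w") as out:
            # One CREATE statement per row of each vertex table in schema.json.
            for vertex in self.schema.vertices():
                sql = f"SELECT {', '.join(vertex.fields)} FROM {vertex.table_name}"
                for row in self.db.query(sql):
                    props = ", ".join(
                        f"{name}: {value!r}"
                        for name, value in zip(vertex.fields, row)
                    )
                    out.write(f"CREATE (:{vertex.vertex_name} {{{props}}})\n")
```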

4. Evaluation

In this section, we evaluate AMANDA's efficiency by observing both feasibility and query response aspects, with respect to the four parameters below:
  • Migration Correctness: Queries on each entity (e.g., Suppliers, Products) were conducted on Dgraph to ensure that all table rows were successfully migrated into the target database (see the query sketch after this list);
  • Query Execution: Six queries were performed in Postgres, MySQL, and Dgraph to ensure that the same queries run on the source databases can be executed on the target one. In this context, Postgres and MySQL are both source databases, whereas Dgraph is the target database;
  • Query Performance: An execution time assessment of six queries in Postgres, MySQL, and Dgraph. This comparison intends to verify whether the migration between databases is worth executing. Each query (Q1–Q6) ran 100 times on each database (Postgres, MySQL, and Dgraph), and we calculated the average execution time. The queries are presented later in this section;
  • Migration Speed: The time required to produce a graph database starting from a Relational Database Management System (RDBMS). We executed and registered the execution time of the migration script in two scenarios: MySQL ⇒ Dgraph and Postgres ⇒ Dgraph.
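For instance, such correctness checks can be expressed as count queries in Dgraph's query language (DQL). A minimal sketch, assuming a predicate such as product_name was migrated as described in Section 3 (the predicate name is illustrative):

```
{
  products(func: has(product_name)) {
    count(uid)
  }
}
```

The returned count is then compared against the row count of the corresponding table in the source RDBMS.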
Based on the previous works described in Section 2, i.e., [36,37,42], we evaluated our solution with two different datasets: (a) Northwind [44], with 778 kilobytes, presented in Section 4.1, and (b) the IMDB [45] dataset, with 5.38 gigabytes, shown in Section 4.2. The Northwind database consists of data representing a small-business system with the following tables: suppliers, products, categories, employees, orders, customers, shippers, territories, and regions. The IMDB dataset encompasses comment reviews for movies and TV shows, with information about individual media titles, film crew, TV episodes, and user ratings. AMANDA's performance evaluation was conducted with both a small and a large dataset, both presenting strong relationships between entities, which makes them suitable for comparing RDBMSs and graph databases.

4.1. Case Study with the Northwind Dataset

For this case study with Northwind, two versions of the same dataset were used in the experiment, thus allowing for a migration assessment for the two scenarios described in Migration Speed. Figure 4 shows the Northwind database schema.
Although the datasets are the same, the RDBMS source databases are not homogeneous. In Postgres, there are 91 customers, but in MySQL, there are 93 of them. The same goes for shippers. The number of rows for each table can be seen in Table 2 and Table 3.
Initially, we transformed the schema in Figure 4 into a graph, i.e., a set of vertices and edges. AMANDA carried out this process by querying the source databases, i.e., Postgres and MySQL. Figure 5 illustrates the resulting transformation of the relational schema into a graph model.
The experiment was performed with docker-compose running on a computer with an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz, 32 gigabytes of RAM, and 1 terabyte of storage. The experiment results are shown and discussed below.
AMANDA’s data migration correctness is shown in Table 2 and Table 3. Both tables contain the number of elements for each relational database table or graph database label, e.g., the source databases contain eight elements in table categories. Once the migration process is completed, the target database should also contain eight elements in the categories label. Although tables employeeterritories and order_details exist in the RDBMS, this is not the case for the graph database, given that, in a graph database, associative tables are converted to edges.
Besides migration correctness, another significant metric is the time and the computational resources (e.g., memory and CPU) used by the migration process. Table 4 summarizes these metrics, indicating that the entire migration process took only 1 s to complete with both databases running on the same machine. The amount of RAM used was 19.47 MB; the CPU time was 0.54 s for the migration from Postgres to Dgraph and 0.57 s for the migration from MySQL to Dgraph. The tools used to measure performance were pyinstrument (https://pyinstrument.readthedocs.io/en/latest/home.html, accessed on 9 May 2022) for execution time, resource (https://docs.python.org/3/library/resource.html, accessed on 9 May 2022) for CPU usage, and guppy (https://pypi.org/project/guppy3/, accessed on 9 May 2022) for memory consumption.
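As an illustration of how these three tools can be combined around a migration run, consider the minimal sketch below; run_migration() is a hypothetical stand-in for AMANDA's migration entry point, not its actual API.

```python
import resource

from guppy import hpy
from pyinstrument import Profiler


def run_migration():
    # Placeholder for AMANDA's migration entry point (hypothetical).
    ...


heap = hpy()
profiler = Profiler()

profiler.start()
run_migration()
profiler.stop()

usage = resource.getrusage(resource.RUSAGE_SELF)
print(profiler.output_text())                    # wall-clock execution profile
print(f"CPU time: {usage.ru_utime:.2f} s")       # user CPU time
print(f"Heap in use: {heap.heap().size} bytes")  # current memory consumption
```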
We also evaluated the query performance between databases through six queries extracted from Neo4J’s tutorial on importing relational data (https://neo4j.com/developer/guide-importing-data-and-etl/, accessed on 9 May 2022), namely:
(Q1) Find a sample of employees who sold orders, together with the products contained in those orders.
(Q2) Find the supplier and category for a specific product.
(Q3) Which employee had the highest cross-selling count of 'chocolate' and some other product?
(Q4) How are the employees organized in terms of hierarchy and accountability? Who reports to whom?
(Q5) How many orders were made by each part of the hierarchy?
(Q6) Which employees indirectly report to one another?
Figure 6 and Figure 7 illustrate a query performance comparison between the databases. For queries Q1, Q2, Q3, and Q6, it is worth pointing out that Dgraph outperforms the RDBMSs. It is reasonable to say that this happens because, in a graph database, the relationships themselves are stored in the database. In contrast, in an RDBMS, the relationships are always determined at query run time, with the aid of several performance-degrading join operations.
Both RDBMSs in this experiment (MySQL and Postgres) showed the best performance when running queries Q4 and Q5, which is likely a consequence of single-table data storage. The graph database (Dgraph), however, makes use of relationships. Regarding Q5, Dgraph generates a compound query composed of three levels of aggregation and relationships, thus handling more data. In contrast, Postgres and MySQL have just one level of aggregation and association, thus handling less data.
Besides the correctness and performance of the migration, a user must be able to query the same information from the target database that they would otherwise query from the source database. In Figure 8, Figure 9 and Figure 10, we present the results of queries executed in both the source and target databases. After the migration, the same data queried in the source database can also be queried in the target database, showing the success of the migration.

4.2. Case Study with IMDB Dataset Reviews

Since the scenario in Section 4.1 already demonstrated that the tool works with two source RDBMSs, the experiments in this case study with IMDB considered only the Postgres ⇒ Dgraph scenario. Figure 11 shows the IMDB database schema.
To carry out the migration, we transformed the schema in Figure 11 into a graph, i.e., a set of vertices and edges. AMANDA then queried the source database, i.e., Postgres. Figure 12 illustrates the resulting transformation of the relational schema into a graph model.
Another important metric is the time and the computational resources (e.g., memory and CPU) used by the migration process. Table 5 summarizes these metrics, indicating that the entire migration process took 7 days to complete for Postgres. The amount of RAM used was 19.47 MB, and the CPU time for the migration from Postgres to Dgraph was 26.65 min. The tools used to measure performance were pyinstrument (https://pyinstrument.readthedocs.io/en/latest/home.html, accessed on 9 May 2022) for execution time, resource (https://docs.python.org/3/library/resource.html, accessed on 9 May 2022) for CPU usage, and guppy (https://pypi.org/project/guppy3/, accessed on 9 May 2022) for memory consumption.
Apart from the migration performance, we also evaluated query performance between databases through the six queries below:
(Q1) Who are the actors/actresses of the movie "Carmencita"?
(Q2) Which movies of the genre "Action" ended in 1960 or earlier?
(Q3) Which movies is Henner Hofmann known for?
(Q4) How many episodes does "The Bold and the Beautiful" have?
(Q5) What are the other names of "The Unchanging Sea"?
(Q6) Which titles are "tvMiniSeries" and have an average rating of 10?
Figure 13 illustrates a query performance comparison between the databases. For queries Q3, Q4, and Q5, it is worth pointing out that Dgraph outperforms the RDBMS. It is reasonable to say that this happens because the relationships in a graph database are stored in the database itself. In contrast, in an RDBMS, the relationships are always determined at query run time, with the aid of several performance-degrading join operations.
Postgres showed the best performance when running queries Q1, Q2, and Q6, which is likely a consequence, in this case, of Dgraph's query structure, in which it was necessary to filter values twice. This double filtering was not necessary in Postgres. Furthermore, in Dgraph, for queries Q1, Q2, and Q6, a filter had to be enforced so that only nodes with the desired relationship were returned. This type of operation degrades database performance. An example for Q6 is shown in Figure 14.
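Figure 14 shows the actual query as an image; to make the double-filter pattern concrete, a sketch of what a Q6-style DQL query could look like is given below. The predicate and edge names (title_type, primary_title, has_rating, average_rating) are hypothetical, not AMANDA's actual migrated schema.

```
{
  titles(func: eq(title_type, "tvMiniSeries")) @cascade {
    primary_title
    has_rating @filter(eq(average_rating, 10.0)) {
      average_rating
    }
  }
}
```

The first filter selects the title type; the @cascade directive combined with the @filter on the rating edge drops nodes without the desired relationship, which corresponds to the second filtering step described above.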

5. Takeaways

AMANDA reads a configuration file containing the migration characteristics, such as the table and attribute definitions for the migration process, and then migrates those tables and attributes. First, AMANDA turns the tables into graph nodes and, after that, creates the relationships between the nodes.
To compare AMANDA with other methods proposed in the literature, we present a brief comparison and a table summarizing it. It is important to note that not all of the works proposed in the literature can be compared with ours. Some works, such as [11,18,21,34,35,38,43], do not use the same datasets that we have used, whereas the works [36,37,42] do. Therefore, we compare AMANDA with the works that use the same datasets and database paradigms as ours.
Starting with the Northwind dataset, when comparing the migration time taken by AMANDA and the work proposed in [42], AMANDA took 1 s to migrate 1107 Northwind dataset tuples, considerably faster than [42], whose migration tool would take 26.10 s to complete the same process (see Table 6). The authors in [36,37] did not measure the migration time between databases with the Northwind dataset.
Regarding the IMDB database, when comparing the migration time taken by AMANDA and the work proposed in [42], AMANDA took 168 h to migrate 174,728,704 tuples, whereas the method proposed in [42] would take 992 h. The authors in [37] did not measure the migration time between databases with the IMDB dataset, while the work in [36] has no experiments with the IMDB dataset.
It is important to note that the dataset sizes used here and in the compared works [36,37,42] differ. AMANDA migrates 1107 tuples of the Northwind dataset, whereas [37,42] migrate 3308 tuples. This is because, in our approach, associative tables do not need to be migrated; they are only used to build the edges between graph nodes. Similarly, for the IMDB dataset, we used a dataset with 174,728,704 tuples, whereas [42] used a subset of IMDB with 1,673,074 tuples. Therefore, to make the comparisons in the paragraphs above, we used the rule of three.
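For clarity, by rule of three we mean linear extrapolation over the tuple counts, presumably of the form:

t_extrapolated = t_reported × (N_AMANDA / N_related)

that is, a time reported for a different tuple count is scaled by the ratio of tuple counts before being compared with AMANDA's measured time.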
Regarding the usage of computational resources, none of the works that use the same datasets as ours provide such metrics, e.g., CPU and RAM usage. Therefore, we cannot compare the resources used by AMANDA with those of the works in the literature. Table 6 summarizes the performance comparison among the works.
AMANDA also has qualitative advantages; for example, the configuration file schema.json provides enhanced flexibility when compared to [18], which uses the database's metadata, or [37], which uses ER diagram traversal.
Our migration technique allows for querying the data in the form of a graph, as performed in [43]. AMANDA's architecture is straightforward, formed by three elements: the source database, the migration script, and the target database, needing no other interfaces to perform the migration or to make the data available for querying.
AMANDA's superior performance demonstrates that relying on a schema.json file, provided beforehand and containing every table and attribute to be migrated, is superior to relying on an algorithm-based approach that instead inspects the database metadata or traverses the ER diagram.
Our solution demonstrates the best performance and flexibility compared with most previous studies that provide similar test information.

6. Conclusions

This article presented AMANDA, a middleware to migrate data from a Relational Database Management System (RDBMS) to a graph database. It provides flexibility to users in the migration process through a schema.json file, which permits users to migrate the whole database or just a part of it. Furthermore, the AMANDA middleware provides an architecture that can be extended to support other RDBMSs and graph databases. The experiments were performed with MySQL and Postgres, both RDBMSs, as source databases, and Dgraph as the target database, for the Northwind dataset.
For the IMDB dataset, with 174,728,704 tuples, the experiment was performed only with Postgres as the source database. The migration from Postgres to Dgraph took 168 h. On the other hand, for Northwind, with 1107 tuples, the migration from Postgres to Dgraph took 1 s.
The main contributions of AMANDA are: (a) its flexibility in allowing users to define specific tables and attributes to migrate, not requiring the migration of the entire database, as in previous works; (b) its adaptability and extensibility to other databases of the RDBMS and non-relational paradigms, such as graph databases; (c) its direct-query mode, which does not require additional packages or other specific knowledge, facilitating developer use; and (d) its high (100%) reliability and excellent execution time (1 s) in data migration, surpassing related works.
In future work, we envision: (a) a computational performance evaluation using large, complex databases composed of images, videos, and texts, with memory, CPU, and parallel-environment tests; (b) the exploration of machine-learning or complex-network-based approaches to detect the similarity of the stored data and to group new data considering migration time characteristics, allowing for the best information retrieval performance.

Author Contributions

Conceptualization, P.M.F., T.A.F. and F.T.G.; methodology, T.A.F. and F.T.G.; validation, J.S.Q., F.L.S., T.B.F.S. and P.V.V.P.C.; writing—original draft preparation, P.V.V.P.C., T.A.F., P.M.F., F.T.G., T.B.F.S. and F.S.S.; writing—review and editing, J.S.Q., F.S.S., F.T.G. and F.L.S.; Investigation, P.M.F., T.A.F., F.L.S., T.B.F.S., J.S.Q., P.V.V.P.C., F.S.S. and F.T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Samsung Eletrônica da Amazônia Ltda., under the stimulus of the Brazilian Informatics Law n° 8.387/91.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Northwind dataset can be found at https://code.google.com/archive/p/northwindextended/downloads, accessed on 9 May 2022. The IMDB dataset can be found at https://datasets.imdbws.com/, accessed on 9 May 2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jaakkola, H.; Thalheim, B. Sixty years–and more–of data modelling. Inf. Model. Knowl. Bases 2021, 32, 56.
  2. Kellou-Menouer, K.; Kardoulakis, N.; Troullinou, G.; Kedad, Z.; Plexousakis, D.; Kondylakis, H. A survey on semantic schema discovery. VLDB J. 2021, 1–36.
  3. Hamouda, S.; Zainol, Z. Document-Oriented Data Schema for Relational Database Migration to NoSQL. In Proceedings of the 2017 International Conference on Big Data Innovations and Applications (Innovate-Data), Prague, Czech Republic, 21–23 August 2017; pp. 43–50.
  4. Giuntini, F.T.; de Moraes, K.L.; Cazzolato, M.T.; de Fátima Kirchner, L.; Dos Reis, M.d.J.D.; Traina, A.J.M.; Campbell, A.T.; Ueyama, J. Modeling and Assessing the Temporal Behavior of Emotional and Depressive User Interactions on Social Networks. IEEE Access 2021, 9, 93182–93194.
  5. Lee, C.H.; Zheng, Y.L. SQL-to-NoSQL schema denormalization and migration: A study on content management systems. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2022–2026.
  6. Fonseca, S.C.; Lucena, M.C.; Reis, T.M.; Cabral, P.F.; Silva, W.A.; de Santos, S.F.; Giuntini, F.T.; Sales, J. Automatically Deciding on the Integration of Commits Based on Their Descriptions. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), Melbourne, Australia, 15–19 November 2021; pp. 1131–1135.
  7. Machado, R.d.S.; Pires, F.d.S.; Caldeira, G.R.; Giuntini, F.T.; Santos, F.d.S.; Fonseca, P.R. Towards Energy Efficiency in Data Centers: An Industrial Experience Based on Reuse and Layout Changes. Appl. Sci. 2021, 11, 4719.
  8. Freitas, H.; Faiçal, B.S.; Cardoso e Silva, A.V.; Ueyama, J. Use of UAVs for an efficient capsule distribution and smart path planning for biological pest control. Comput. Electron. Agric. 2020, 173, 105387.
  9. Meneguette, R.; De Grande, R.; Ueyama, J.; Filho, G.P.R.; Madeira, E. Vehicular Edge Computing: Architecture, Resource Management, Security, and Challenges. ACM Comput. Surv. 2021, 55, 1–46.
  10. Schulte, J.P.; Giuntini, F.T.; Nobre, R.A.; Nascimento, K.C.d.; Meneguette, R.I.; Li, W.; Gonçalves, V.P.; Rocha Filho, G.P. ELINAC: Autoencoder Approach for Electronic Invoices Data Clustering. Appl. Sci. 2022, 12, 3008.
  11. El Hayat, S.A.; Bahaj, M. Modeling and Transformation from Temporal Object Relational Database into Mongodb: Rules. Adv. Sci. Technol. Eng. Syst. J. 2020, 5, 618–625. Available online: https://astesj.com/v05/i04/p73/ (accessed on 4 February 2022).
  12. Giuntini, F.T.; Ueyama, J. Explorando a Teoria de Grafos e Redes Complexas na Análise de Estruturas de Redes Sociais: Um Estudo de Caso Com a Comunidade Online Reddit. 2017. Available online: https://www.researchgate.net/publication/317137094_Explorando_a_teoria_de_grafos_e_redes_complexas_na_analise_de_estruturas_de_redes_sociais_Um_estudo_de_caso_com_a_comunidade_online_Reddit (accessed on 3 January 2022).
  13. Cazzolato, M.T.; Giuntini, F.T.; Ruiz, L.P.; de Kirchner, F.L.; Passarelli, D.A.; de Jesus Dutra dos Reis, M.; Traina, C., Jr.; Ueyama, J.; Traina, A.J.M. Beyond Tears and Smiles with ReactSet: Records of Users’ Emotions in Facebook Posts. In Proceedings of the XXXIV Simpósio Brasileiro de Banco de Dados—Dataset Showcase Workshop (SBBD-DSW), Fortaleza, Brazil, 7–10 October 2019; pp. 1–12.
  14. Namdeo, B.; Suman, U. Schema design advisor model for RDBMS to NoSQL database migration. Int. J. Inf. Technol. 2021, 13, 277–286.
  15. Giuntini, F.T.; de Moraes, K.L.P.; Cazzolato, M.T.; Kirchner, L.d.F.; Dos Reis, M.d.J.D.; Traina, A.J.M.; Campbell, A.T.; Ueyama, J. Tracing the Emotional Roadmap of Depressive Users on Social Media Through Sequential Pattern Mining. IEEE Access 2021, 9, 97621–97635.
  16. Oracle. What is Big Data? Big Data Defined. 2022. Available online: www.oracle.com/big-data/what-is-big-data/ (accessed on 4 October 2021).
  17. Hariri, R.H.; Fredericks, E.M.; Bowers, K.M. Uncertainty in big data analytics: Survey, opportunities, and challenges. J. Big Data 2019, 6, 1–16.
  18. Unal, Y.; Oguztuzun, H. Migration of data from relational database to graph database. In Proceedings of the 8th International Conference on Information Systems and Technologies, Amman, Jordan, 11–12 July 2018; pp. 1–5.
  19. Rybiński, H. On First-Order-Logic Databases. ACM Trans. Database Syst. 1987, 12, 325–349.
  20. Freitas, A.; Sales, J.E.; Handschuh, S.; Curry, E. How hard is this query? Measuring the Semantic Complexity of Schema-agnostic Queries. In Proceedings of the 11th International Conference on Computational Semantics, London, UK, 14–17 April 2015; Association for Computational Linguistics: London, UK, 2015; pp. 294–304.
  21. Namdeo, B.; Suman, U. A Model for Relational to NoSQL database Migration: Snapshot-Live Stream Db Migration Model. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; Volume 1, pp. 199–204.
  22. PostgreSQL Global Development Group. PostgreSQL—The World’s Most Advanced Open Source Relational Database. Available online: https://www.postgresql.org/ (accessed on 9 May 2022).
  23. MySQL. MySQL—The World’s Most Popular Open Source Database. Available online: https://dev.mysql.com/doc/ (accessed on 9 May 2022).
  24. Oracle Database. Available online: https://www.oracle.com/database/ (accessed on 9 May 2022).
  25. Chamberlin, D.D.; Boyce, R.F. SEQUEL: A Structured English Query Language. In Proceedings of the 1974 ACM SIGFIDET (Now SIGMOD) Workshop on Data Description, Access and Control, Ann Arbor, MI, USA, 1–3 May 1974; Association for Computing Machinery: New York, NY, USA, 1974; pp. 249–264.
  26. Dormando. Memcached—A Distributed Memory Object Caching System. Available online: http://memcached.org/ (accessed on 5 May 2022).
  27. MongoDB. The Application Data Platform. 2022. Available online: https://www.mongodb.com/ (accessed on 5 May 2022).
  28. DataStax. Apache Cassandra: About Transactions and Concurrency Control. Available online: https://docs.datastax.com/en/cassandra-oss/2.1/cassandra/dml/dl_about_transactions_c.html (accessed on 21 March 2022).
  29. Neo4j. Concepts: NoSQL to Graph—Developer Guides. Available online: https://neo4j.com/developer/graph-db-vs-nosql/ (accessed on 9 May 2022).
  30. Khasawneh, T.N.; AL-Sahlee, M.H.; Safia, A.A. SQL, NewSQL, and NOSQL Databases: A Comparative Survey. In Proceedings of the 2020 11th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 7–9 April 2020; pp. 13–21.
  31. Li, Y.; Manoharan, S. A performance comparison of SQL and NoSQL databases. In Proceedings of the 2013 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM), Victoria, BC, Canada, 27–29 August 2013; pp. 15–19.
  32. Martins, P.; Abbasi, M.; Sá, F. A study over NoSQL performance. In Proceedings of the World Conference on Information Systems and Technologies, Galicia, Spain, 16–19 April 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 603–611.
  33. Falcão, T.A.; Furtado, P.M.; Queiroz, J.S.; Matos, P.J.; Antunes, T.F.; Carvalho, F.S.; Fonseca, P.C.; Giuntini, F.T. Comparative Analysis of Graph Databases for Git Data. J. Phys. Conf. Ser. 2021, 1944, 012004.
  34. Orel, O.; Zakošek, S.; Baranovič, M. Property oriented relational-to-graph database conversion. Automatika 2016, 57, 836–845.
  35. Sayeb, Y.; Ayari, R.; Naceur, S.; Ghézala, H.B. From Relational Database to Big Data: Converting Relational to Graph Database, MOOC Database as Example. J. Ubiquitous Syst. Pervasive Netw. 2017, 8, 15–20.
  36. Vyawahare, H.R.; Karde, P.P.; Thakare, V.M. An efficient graph database model. Int. J. Innov. Technol. Explor. Eng. 2019, 88, 1292–1295.
  37. Nan, Z.; Bai, X. The study on data migration from relational database to graph database. J. Phys. Conf. Ser. 2019, 1345, 022061.
  38. Kim, H.J.; Ko, E.J.; Jeon, Y.H.; Lee, K.H. Techniques and guidelines for effective migration from RDBMS to NoSQL. J. Supercomput. 2020, 76, 7936–7950.
  39. De Virgilio, R.; Maccioni, A.; Torlone, R. Converting relational to graph databases. In Proceedings of the First International Workshop on Graph Data Management Experiences and Systems, New York, NY, USA, 23–24 June 2013; pp. 1–6.
  40. Palod, S. Transformation of Relational Database Domain into Graph-Based Domain for Graph-Based Data Mining; The University of Texas at Arlington: Arlington, TX, USA, 2004.
  41. De Virgilio, R.; Maccioni, A.; Torlone, R. R2G: A Tool for Migrating Relations to Graphs. EDBT 2014, 2014, 640–643.
  42. Megid, Y.A.; El-Tazi, N.; Fahmy, A. Using functional dependencies in conversion of relational databases to graph databases. In Proceedings of the International Conference on Database and Expert Systems Applications, Regensburg, Germany, 3–6 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 350–357.
  43. Sokolova, M.V.; Gómez, F.J.; Borisoglebskaya, L.N. Migration from an SQL to a hybrid SQL/NoSQL data model. J. Manag. Anal. 2020, 7, 1–11.
  44. Yugabyte. About the Northwind Sample Database. Available online: https://docs.yugabyte.com/latest/sample-data/northwind/#about-the-northwind-sample-database (accessed on 21 March 2022).
  45. IMDB. IMDb Datasets. Information Courtesy of IMDb (https://www.imdb.com). Used with Permission. Available online: https://www.imdb.com/interfaces/ (accessed on 15 May 2022).
Figure 1. AMANDA middleware architecture.
Figure 2. AMANDA workflow. (A) Query tables, (B) create nodes in the graph, and (C) make relationships between nodes. orders_details is an associative table; it is converted into an edge in the graph.
Figure 3. UML diagram representing the migration between Postgres and Dgraph.
Figure 4. Northwind relational schema.
Figure 5. Northwind graph schema.
Figure 6. Query performance comparison between the Postgres and Dgraph databases.
Figure 7. Query performance comparison between the MySQL and Dgraph databases.
Figure 8. Query output for Q1. Left: source database (Postgres). Right: target database (Dgraph).
Figure 9. Query output for Q2. Left: source database (Postgres). Right: target database (Dgraph).
Figure 10. Query output for Q3. Left: source database (Postgres). Right: target database (Dgraph).
Figure 11. IMDB relational schema.
Figure 12. IMDB graph schema.
Figure 13. Query performance comparison between the Postgres and Dgraph databases for the IMDB dataset.
Figure 14. Q6 example for IMDB.
Table 1. Related works.

Author | SQL DB | NoSQL DB | Dataset | Approach
Orel et al. | IBM Informix | Neo4J | A small set of data from dba.stackexchange.com | Read database metadata to obtain table information
Sayeb et al. | MySQL | Neo4J | Database from a MOOC | Read database metadata to obtain table information
Unal et al. | MySQL | Neo4J | Legal Document System | SchemaCrawler and Java SQL Library
Megid et al. | SQL Server | Neo4j | Northwind, Wikipedia-2008 subset, IMDB subset | Functional dependencies
Nan et al. | SQL Server | Neo4J | Northwind and IMDB | Entity Relation (ER) diagram
Vyawahare et al. | MySQL | Neo4J | Northwind | Read database metadata to obtain table information
Hayat et al. | OracleDB | MongoDB | N/A | Formal rules and source DB’s ER diagram
Sokolova et al. | MySQL | MySQL + Apache Jena Fuseki | Retail business company | Ontology and combination of SQL and NoSQL DB
Kim et al. | MySQL | HBase + Phoenix | TPC-H | Query translation and denormalization
Namdeo et al. | MySQL | MongoDB | Database of an academic department | Database snapshot and streaming of changed data
Our solution | Postgres | Dgraph | Northwind and IMDB | Direct queries on the database to migrate the data specified in schema.json
Table 2. Number of migrated elements from Postgres to Dgraph.

Entity | Postgres | Dgraph
categories | 8 | 8
customers | 91 | 91
employees | 9 | 9
orders | 830 | 830
products | 77 | 77
region | 4 | 4
shippers | 6 | 6
suppliers | 29 | 29
territories | 53 | 53
Table 3. Number of migrated elements from MySQL to Dgraph.

Entity | MySQL | Dgraph
categories | 8 | 8
customers | 93 | 93
employees | 9 | 9
orders | 830 | 830
products | 77 | 77
region | 4 | 4
shippers | 3 | 3
suppliers | 29 | 29
territories | 53 | 53
Table 4. Migration process evaluation for the migration scenarios Postgres ⇒ Dgraph and MySQL ⇒ Dgraph. Postgres and MySQL are both source RDBMSs.

Source DB | Target DB | Execution Time (s) | Memory (MB) | CPU Time (s)
Postgres | Dgraph | 1 | 19.47 | 0.54
MySQL | Dgraph | 1 | 19.47 | 0.57
Table 5. Migration process evaluation in the scenario Postgres ⇒ Dgraph. Postgres is the source RDBMS.

Source DB | Target DB | Execution Time (h) | Memory (MB) | CPU Time (min)
Postgres | Dgraph | 168 | 19.47 | 26.65
Table 6. Comparison of the migration performance. NP: not provided. Vyawahare et al. did not use IMDB; therefore, N/A.

Work | Northwind Time | Northwind RAM (MB) | Northwind CPU (s) | IMDB Time | IMDB RAM (MB) | IMDB CPU (min)
AMANDA | 1 s | 19.47 | 0.54 | 168 h | 19.47 | 26.65
Megid et al. | 26.10 s | NP | NP | 992 h | NP | NP
Nan et al. | NP | NP | NP | NP | NP | NP
Vyawahare et al. | NP | NP | NP | N/A | N/A | N/A
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

