Evaluation of the Impact of AES Encryption on Query Read Performance Across Oracle, MySQL, and SQL Server Databases
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript evaluates the performance impact of AES-based Transparent Data Encryption (TDE) on read-query workloads using the TPC-H benchmark across Oracle 19c, MySQL (8.0.38 and 9.2.0), and Microsoft SQL Server 2022. Experiments use three scale factors (SF=0.1, 1, 10) and measure cumulative execution time for the 22 TPC-H queries and system-level metrics (CPU, available RAM, disk time). The authors report that SQL Server shows the best overall scalability and that enabling TDE introduces observable overheads that vary by DBMS and AES key size.
Database encryption and its performance cost is an important practical question for industry and research. Use of the TPC-H benchmark is appropriate for OLAP-style read workloads and supports generalisability to decision-support queries. Comparative cross-vendor evaluation (Oracle, MySQL, SQL Server) is valuable and of interest to readers choosing encryption strategies.
The study addresses an interesting and practically relevant topic, but there are important methodological and presentation issues that must be addressed before the manuscript can be accepted.
Some recommendations for improving the manuscript:
Please check whether you have followed all the requirements of the manuscript formatting template. For example, after Figure 1, Table 1, etc., what separator is used – a blank space or a dash before the descriptive text?
Check whether you have explained all abbreviations in the text at their first appearance, as is customary.
For each figure/table, etc. that is not the author's, please specify the literature source from which it was borrowed.
Provide full, exact hardware and virtualization details including AES-NI capability.
Share all scripts and configuration files (schema DDL, index scripts, DBMS config, and test-run scripts) in the repository and reference specific commit/version.
Report per-query results (mean, median, std, and boxplots) and add statistical significance testing between encrypted and unencrypted runs.
Clarify performance counter methodology, correct or explain any puzzling counter values (e.g., Disk Time >100%).
Investigate and explain anomalous results by including query plans, buffer-pool stats, and I/O latency/throughput diagnostics for representative cases.
Revise claims to reflect experimental scope and limitations; add a dedicated limitations subsection.
Substantially improve table formatting and add visual summaries (normalized bars or line plots) to highlight trends.
Perform careful English copy-editing to improve readability.
Comments on the Quality of English Language
English could be improved to more clearly express the research.
Author Response
We would like to thank the reviewer for the very helpful and detailed comments.
Comment 1 - Please check whether you have followed all the requirements of the manuscript formatting template. For example, after Figure 1, Table 1, etc., what separator is used – a blank space or a dash before the descriptive text?
Response 1 - We thank the reviewer for noting this. All figures and tables have been revised to follow the journal’s formatting template. Captions now use a period after the figure or table number (e.g., “Figure 1. …”) and are formatted consistently throughout the manuscript.
Comment 2 - Check whether you have explained all abbreviations in the text at their first appearance, as is customary.
Response 2 - We thank the reviewer for the reminder. All abbreviations (e.g., TDE, AES, DBMS, OLAP, CPU, RAM) are now defined at first mention in the text.
Comment 3 - For each figure/table, etc. that is not the author's, please specify the literature source from which it was borrowed.
Response 3 - We thank the reviewer for this observation. Figures 2, 3, 4, and 5 were created entirely by the authors. The only figure not produced by the authors is Figure 1, which was adapted from the literature and already included its source citation.
Comment 4 - Provide full, exact hardware and virtualization details including AES-NI capability.
Response 4 - We thank the reviewer for this comment. We have updated the manuscript to include the full hardware and virtualization details: the tests were executed on a VMware ESXi cluster composed of three Dell servers with Intel® Xeon® E5-2670 v3 processors (with AES-NI support) and 64 GB of RAM per node. A detailed description of the environment has been added to the Materials and Methods section (lines 231–236).
Comment 5 - Share all scripts and configuration files (schema DDL, index scripts, DBMS config, and test-run scripts) in the repository and reference specific commit/version.
Response 5 - We thank the reviewer for this comment. The DDL and index scripts used in the experiments are the standard files provided by the TPC-H toolkit, and the test-run scripts developed for this study have now been included in the repository. The DBMS configuration files used for Oracle, MySQL, and SQL Server have also been added. All scripts are organised in the project folder Scripts/Queries-tbl, and the DBMS configuration files are stored in DBMS Config.
Comment 6 - Report per-query results (mean, median, std, and boxplots) and add statistical significance testing between encrypted and unencrypted runs.
Response 6 - We thank the reviewer for this helpful recommendation. The study reports mean and standard-deviation values; however, no formal significance testing was performed, as the focus is on comparative performance tendencies. A clarifying paragraph has been added at the end of Section 4 (lines 706–712) noting this descriptive scope and indicating that future work will include formal statistical analysis. “The analysis presented in this paper is descriptive, based on averages and standard deviations to illustrate overall performance trends. Statistical significance across configurations was not formally assessed, as the focus was on comparative tendencies rather than inferential testing. Future research could include formal significance analysis to better quantify the strength of the observed differences.”
Comment 7 - Clarify performance counter methodology, correct or explain any puzzling counter values (e.g., Disk Time >100%).
Response 7 - We thank the reviewer for this observation. Section 3.3 now explains that counters such as % Disk Time can occasionally exceed 100% because Windows Performance Monitor aggregates overlapping I/O across multiple cores, thus overstating utilization. “The Windows Performance Monitor counters used (e.g., % Processor Time, Available Mbytes, and % Disk Time) provide coarse estimates and can overstate utilization on multi-core systems, occasionally reporting values above 100% due to overlapping or aggregated I/O operations.” (lines 315–318)
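For illustration, such counters can be logged from the Windows command line with the built-in typeperf utility; the one-second interval, sample count, and output file below are placeholders rather than the exact settings used in the study:

typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\% Disk Time" -si 1 -sc 600 -o counters.csv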
Comment 8 - Investigate and explain anomalous results by including query plans, buffer-pool stats, and I/O latency/throughput diagnostics for representative cases.
Response 8 - We thank the reviewer for this valuable suggestion. We agree that analyzing query plans, buffer-pool statistics, and I/O latency would provide deeper insight into anomalous results. These diagnostic analyses were beyond the scope of the current study, which was conducted in an academic setting under time and budget constraints, but they will be incorporated in future work to strengthen the interpretation of performance differences.
Comment 9 - Revise claims to reflect experimental scope and limitations; add a dedicated limitations subsection.
Response 9 - We thank the reviewer for this comment. We agree that the scope of the study and its limitations needed to be stated more clearly. This has now been addressed at the end of Section 4, where the main limitations have been added.
Comment 10 - Substantially improve table formatting and add visual summaries (normalized bars or line plots) to highlight trends.
Response 10 - We thank the reviewer for the constructive feedback. We acknowledge that adding normalized visual summaries such as bar or line plots would improve data presentation. Due to space and scope constraints, these enhancements will be included in our next paper.
Comment 11 - Perform careful English copy-editing to improve readability.
Response 11 - We thank the reviewer for this helpful feedback. The manuscript underwent complete English copy-editing to correct typographical and phrasing errors, improve clarity, and ensure that figure captions and related-work descriptions are concise and grammatically accurate.
Reviewer 2 Report
Comments and Suggestions for Authors
The paper intends to evaluate the impact of AES encryption on the performance of SQL Server, Oracle, and MySQL when using Transparent Data Encryption (TDE) with the TPC-H benchmark across three scale factors. After analyzing the paper, I identified the following issues:
- Limited novelty: The study only confirms, in a simple lab scenario using three low-performance virtual machines, results that are already well established—namely, the throughput and latency of AES encryption on database servers.
- Hardware dependency: When evaluating a database process (e.g., query performance, benchmarks like TPC-H, or encryption overhead such as AES/TDE), the choice of hardware is critical because results heavily depend on machine characteristics. A relevant experimental setup typically requires 4–32 nodes. Each node should ideally have 32–64 CPU cores with AES-NI support, 64 or more threads, 512 GB–1 TB of RAM, and NVMe SSD storage. From this perspective, the experimental scenario described in the paper is overly simplistic.
- Language and clarity issues: The paper contains numerous grammar mistakes, poorly constructed sentences, and confusing expressions. Some parts are hard to read and understand. Examples include:
a) Lines 432–433: The sentence is poorly constructed and needs rephrasing.
b) Lines 438–439: The meaning of “key size rotation” is unclear in context.
c) Line 440: The sentence “First, it is important to tell Oracle where the database binaries live” is confusing; the verbs “tell” and “live” are awkward.
d) Lines 304–305: The sentence “Rijndael finished its development in 2001 with Vincent Rijmen and Joan Daeman” is misleading; Rijndael is the original name of AES, derived from the authors’ names.
Please review the entire manuscript for similar issues.
- Claimed contributions: The paper claims three contributions. However, two of them—“Highlighting the trade-offs between security and system performance” (line 54) and “A comparison of different existing encryption algorithms” (line 55)—are only superficially addressed.
- Keywords: I suggest replacing the abbreviations “TDE” and “TPC-H” with their full forms.
- Table 1: The number of citations is irrelevant in this context. This column may be deleted.
- Figure 3: The word “Throughput” is unnecessarily capitalized in the figure caption.
Author Response
We would like to thank the reviewer for the very helpful and detailed comments.
Comment 1 - Limited novelty: The study only confirms, in a simple lab scenario using three low-performance virtual machines, results that are already well established—namely, the throughput and latency of AES encryption on database servers.
Response 1 - We thank the reviewer for this insightful comment. We acknowledge that the experimental environment is limited in scale and does not reflect high-end production hardware. The study was conducted within an academic context, where both hardware and software resources are constrained. The use of three virtual machines allowed full control over the DBMS configurations, encryption settings, and benchmarking process, ensuring consistency and reproducibility despite the limited hardware capacity. We have now made this constraint explicit in the revised manuscript and clarified that our conclusions apply primarily to this controlled, resource-constrained scenario. We also highlight this as an avenue for future work, where we plan to extend the evaluation to larger-scale and more heterogeneous production-like environments.
Comment 2 - Hardware dependency: When evaluating a database process (e.g., query performance, benchmarks like TPC-H, or encryption overhead such as AES/TDE), the choice of hardware is critical because results heavily depend on machine characteristics. A relevant experimental setup typically requires 4–32 nodes. Each node should ideally have 32–64 CPU cores with AES-NI support, 64 or more threads, 512 GB–1 TB of RAM, and NVMe SSD storage. From this perspective, the experimental scenario described in the paper is overly simplistic.
Response 2 - We thank the reviewer for highlighting the importance of hardware resources. The experimental setup was constrained to laboratory conditions using three virtual machines to ensure consistent control over software configurations and encryption mechanisms. We clarified this limitation in the Future Work section, noting that future experiments will scale to multi-node environments with higher-end hardware to validate the observed trends under heavier workloads.
Comment 3 - Language and clarity issues: The paper contains numerous grammar mistakes, poorly constructed sentences, and confusing expressions. Some parts are hard to read and understand. Examples include:
a) Lines 432–433: The sentence is poorly constructed and needs rephrasing.
b) Lines 438–439: The meaning of “key size rotation” is unclear in context.
c) Line 440: The sentence “First, it is important to tell Oracle where the database binaries live” is confusing; the verbs “tell” and “live” are awkward.
d) Lines 304–305: The sentence “Rijndael finished its development in 2001 with Vincent Rijmen and Joan Daeman” is misleading; Rijndael is the original name of AES, derived from the authors’ names.
Please review the entire manuscript for similar issues.
We thank the reviewer for identifying these issues. All language and grammar errors were corrected throughout the manuscript. Specific corrections include:
Response 3.a – We thank the reviewer for pointing this out. The sentence has been rephrased to improve clarity and readability, and the updated version now reads: “Transparent Data Encryption (TDE) is a feature used to encrypt data at rest, protecting sensitive information stored in tablespaces and columns. This ensures that, even if the database files are accessed outside the Oracle environment, the data remains unreadable.” (lines 455–457)
Response 3.b – We thank the reviewer for highlighting this point. The sentence has been revised to clarify the meaning of rekeying in Oracle TDE. The updated version now reads: “Oracle requires a wallet, a secure location that stores the TDE master encryption key. Before that, it is necessary to set the environment variables and to create and initialize the keystore. Once this setup is complete, it becomes possible to create an encrypted tablespace and to rotate the encryption key periodically, replacing the existing key with a new one, thereby improving cryptographic security and, if desired, applying a new AES key size during the rekeying process.” (lines 460–467)
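For illustration, the following is a minimal sketch of the Oracle 19c statements behind this procedure; the keystore path, password, and tablespace names are placeholders rather than the exact values used in the study:

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE 'C:\oracle19c\admin\wallet' IDENTIFIED BY "WalletPwd#1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "WalletPwd#1";
-- Creates the TDE master encryption key; re-running it later rotates (rekeys) the key
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "WalletPwd#1" WITH BACKUP;
-- Encrypted tablespace; 'AES256' can be replaced by 'AES128' or 'AES192'
CREATE TABLESPACE tpch_enc DATAFILE 'tpch_enc01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);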
Response 3.c – We thank the reviewer for pointing this out. The sentence has been rewritten to improve clarity and remove the awkward phrasing. The revised version now reads: “First, it is necessary to specify the directory where the Oracle database binaries are installed by setting the environment variable ORACLE_HOME in the command line (e.g., set ORACLE_HOME=C:\oracle19c).” (lines 465–467)
Response 3.d – We thank the reviewer for the clarification. The sentence has been corrected to avoid the misleading implication about the development of Rijndael. The revised version now reads: “Rijndael, developed by Vincent Rijmen and Joan Daemen, was selected in 2001 by NIST as the official Advanced Encryption Standard (AES) [38].” (lines 321–323)
In addition, the entire paper underwent copy-editing to improve clarity and readability.
Comment 4 - Claimed contributions: The paper claims three contributions. However, two of them—“Highlighting the trade-offs between security and system performance” (line 54) and “A comparison of different existing encryption algorithms” (line 55)—are only superficially addressed.
Response 4 - We thank the reviewer for this helpful comment. The contributions section was revised to clarify and better align the stated contributions with the study’s actual outcomes. The second and third bullet points now focus on quantifying encryption overhead and comparing AES key sizes across DBMS implementations, rather than broadly stating trade-offs or algorithm comparisons.
Comment 5 - Keywords: I suggest replacing the abbreviations “TDE” and “TPC-H” with their full forms.
Response 5 - We thank the reviewer for the suggestion. The full forms of these abbreviations, Transparent Data Encryption (TDE) and Transaction Processing Performance Council Benchmark H (TPC-H), are now defined at first appearance in the text. Since MDPI requires concise keyword lists, the abbreviations were retained in the keyword section after being defined in the body.
Comment 6 - Table 1: The number of citations is irrelevant in this context. This column may be deleted.
Response 6 - We thank the reviewer for this comment. We understand the concern that citation counts may not always be meaningful in this context. However, we chose to keep this column because it provides readers with an additional indication of how visible each study is within the academic community. We also acknowledge that more recent papers naturally have fewer citations than older ones, which may not fully reflect their actual relevance.
Comment 7 - Figure 3: The word “Throughput” is unnecessarily capitalized in the figure caption.
Response 7 - We thank the reviewer for pointing this out. All figure captions were reviewed for consistency and capitalization; the word “Throughput” in the Figure 3 caption has been corrected to lowercase.
Reviewer 3 Report
Comments and Suggestions for Authors
The manuscript investigates how enabling Transparent Data Encryption with AES affects query read performance on Oracle, MySQL, and SQL Server using TPC-H at multiple scale factors. The authors implement power and throughput runs, collect elapsed time and system resource counters, and report that SQL Server is generally the most efficient while TDE introduces overhead relative to plaintext baselines. The study is timely and practically relevant for security-conscious data platforms. However, I recommend Major Revision for the following reasons.
- Add a short “Security model and scope” section defining attacker goals, data exposure windows, operational controls, and why TDE is the correct control for the studied use cases. Contrast with application-level encryption, deterministic encryption for searchable columns, and HSM-backed key management, clarifying what is and is not covered by the results.
- Results depend on AES modes, padding, and implementation details, which are not documented. For example, SQL Server TDE, Oracle TDE, and MySQL InnoDB tablespace encryption may differ in mode, IV handling, and page encryption boundaries, which can influence I/O patterns and cache behavior.
- Normalize editions and release trains as much as possible, or isolate comparisons within each platform. If unavoidable, add an ablation table showing the performance delta that is attributable to edition change alone. Clearly justify the choice of MySQL 9.2 Enterprise beside 8.0 Community for encryption tests and discuss how this affects internal validity.
- The manuscript focuses on the 22 read queries but omits refresh streams, data maintenance, and index rebuilds, which are sensitive to TDE and are common in warehousing operations.
- The manuscript uses “Power” and “Throughput” runs but still reports counter-intuitive cases where encryption outperforms no-encryption at SF=0.1, which likely reflects cache and I/O noise rather than true cryptographic overhead. The resource counters used are also coarse.
- The manuscript presents averages and occasional standard deviations, but does not test significance across configurations, nor does it estimate effect sizes or control family-wise error across 22 queries and multiple factors.
- The manuscript relies on Windows Performance Monitor counters like “% Disk Time,” which are known to be misleading on modern storage stacks. VM-based hosts with shared storage can introduce noisy neighbors and scheduler variance.
- TDE key creation, rotation, backup encryption, and certificate handling are described narratively, but there is no performance assessment of key rotation, re-encryption operations, or backup/restore with TDE enabled.
- Cases where encrypted runs beat plaintext runs are attributed to “inefficient memory management” or small scale factor effects without concrete evidence.
- There are typographical and phrasing errors and some ambiguous claims, for example in the abstract and in related work summaries, that reduce readability. Perform a thorough language edit, fix typos in the abstract and introduction, and ensure every claim in the related work table maps to a verifiable source and a crisp takeaway. Include figure captions that specify environment details and run counts.
Author Response
We would like to thank the reviewer for the very helpful and detailed comments.
Comment 1 - Add a short “Security model and scope” section defining attacker goals, data exposure windows, operational controls, and why TDE is the correct control for the studied use cases. Contrast with application-level encryption, deterministic encryption for searchable columns, and HSM-backed key management, clarifying what is and is not covered by the results.
Response 1 - We thank the reviewer for the suggestion. A detailed “Security Model and Scope” section is outside the focus of the present study, which is limited to evaluating AES-based TDE performance across the three DBMSs. This topic will be addressed in future work.
Comment 2 - Results depend on AES modes, padding, and implementation details, which are not documented. For example, SQL Server TDE, Oracle TDE, and MySQL InnoDB tablespace encryption may differ in mode, IV handling, and page encryption boundaries, which can influence I/O patterns and cache behavior.
Response 2 - We now explicitly describe the AES modes and implementation differences across Oracle, SQL Server, and MySQL TDE, acknowledging that mode, padding, and IV handling may influence cache and I/O behavior. “Each DBMS applies AES differently in its TDE implementation. SQL Server uses AES in CBC mode with a random initialization vector per page; Oracle TDE uses AES-CBC or AES-OFB depending on version and wallet configuration; MySQL InnoDB tablespace encryption uses AES-256 in CBC mode with per-page IVs and key rotation support. Padding and page-level encryption differences may influence I/O and caching behavior, which can contribute to variations in performance.” (lines 364–370)
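As a hypothetical sketch (the database name tpch and certificate name TdeCert are placeholders), SQL Server TDE is enabled along these lines, with the WITH ALGORITHM clause selecting the AES key size compared in the study:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPwd#1';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
USE tpch;
-- AES_128, AES_192, or AES_256 selects the key size
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE tpch SET ENCRYPTION ON;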
Comment 3 - Normalize editions and release trains as much as possible, or isolate comparisons within each platform. If unavoidable, add an ablation table showing the performance delta that is attributable to edition change alone. Clearly justify the choice of MySQL 9.2 Enterprise beside 8.0 Community for encryption tests and discuss how this affects internal validity.
Response 3 - We thank the reviewer for highlighting edition consistency. Both MySQL 8.0 Community and 9.2 Enterprise support AES-based TDE; however, the Enterprise Edition employs a newer component-based keyring system with more advanced key management options. To assess any performance implications of this architectural difference, both editions were included. We have clarified this distinction in Section 3.2 and acknowledge that version differences may affect comparability, which is noted as a limitation. “Both MySQL 8.0 Community and 9.2 Enterprise editions support Transparent Data Encryption (TDE) using AES-256 through InnoDB tablespace encryption. The Community Edition relies on the keyring_file plugin for local key storage, while the Enterprise Edition introduces the component-based keyring architecture (component_keyring_file.dll) with improved key rotation and management integration. We included both editions to compare these implementations and assess whether the enhanced keyring management in the Enterprise Edition impacts performance.” (lines 295–301)
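For illustration only (file, table, and option values are placeholders), the keyring must be loaded at server startup before InnoDB encryption can be used, after which encryption is controlled per table or tablespace and the master key can be rotated on demand:

-- my.ini, Community Edition: load the keyring plugin at startup
--   early-plugin-load=keyring_file.dll
-- (the Enterprise Edition loads component_keyring_file through its manifest file instead)
CREATE TABLE t_enc (id BIGINT PRIMARY KEY) ENCRYPTION='Y';  -- encrypt a new table
ALTER TABLE lineitem ENCRYPTION='Y';                        -- encrypt an existing table
ALTER INSTANCE ROTATE INNODB MASTER KEY;                    -- rotate the master key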
Comment 4 - The manuscript focuses on the 22 read queries but omits refresh streams, data maintenance, and index rebuilds, which are sensitive to TDE and are common in warehousing operations.
Response 4 - We agree that refresh streams and maintenance operations are relevant for warehouse workloads. These were excluded to isolate read-query behavior but are identified as future work in the revised Limitations subsection.
Comment 5 - The manuscript uses “Power” and “Throughput” runs but still reports counter-intuitive cases where encryption outperforms no-encryption at SF=0.1, which likely reflects cache and I/O noise rather than true cryptographic overhead. The resource counters used are also coarse.
Response 5 - We have added an explanation for cases where encryption appeared faster at scale factor 0.1, clarifying that this likely reflects cache and measurement noise rather than a true cryptographic advantage. “At small scale factors such as scale factor 0.1, encryption sometimes appeared faster than non-encryption. These results are likely caused by caching effects and low I/O contention. Such cases do not represent genuine cryptographic acceleration but rather transient cache and I/O noise. Larger scale factors such as scale factor 1 and scale factor 10 exhibit the expected encryption overhead patterns.” (lines 667–672)
Comment 6 - The manuscript presents averages and occasional standard deviations, but does not test significance across configurations, nor does it estimate effect sizes or control family-wise error across 22 queries and multiple factors.
Response 6 - We appreciate the reviewer’s observation regarding the absence of statistical significance testing. The study was designed as a descriptive performance comparison focused on identifying relative trends rather than performing inferential statistics. To clarify this methodological scope, we added a paragraph at the end of Section 4 stating that the analysis is descriptive and does not include formal significance testing. We also indicate that future research may incorporate statistical analysis to quantify the strength of the observed differences. “The analysis presented in this paper is descriptive, based on averages and standard deviations to illustrate overall performance trends. Statistical significance across configurations was not formally assessed, as the focus was on comparative tendencies rather than inferential testing. Future research could include formal significance analysis to better quantify the strength of the observed differences.” (lines 707–712)
Comment 7 - The manuscript relies on Windows Performance Monitor counters like “% Disk Time,” which are known to be misleading on modern storage stacks. VM-based hosts with shared storage can introduce noisy neighbors and scheduler variance.
Response 7 - We thank the reviewer for pointing this out. We agree that Windows Performance Monitor counters can overstate utilization on multi-core or virtualized systems. Section 3.3 now explains that counters such as % Disk Time may occasionally exceed 100% because of concurrent I/O operations and virtualization effects. “The Windows Performance Monitor counters used (e.g., % Processor Time, Available Mbytes, and % Disk Time) provide coarse estimates and can overstate utilization on multi-core systems, occasionally reporting values above 100% due to overlapping or aggregated I/O operations.” (lines 315–318)
Comment 8 - TDE key creation, rotation, backup encryption, and certificate handling are described narratively, but there is no performance assessment of key rotation, re-encryption operations, or backup/restore with TDE enabled.
Response 8 - We thank the reviewer for this helpful suggestion. We acknowledge that these operations were described narratively but not benchmarked. Discussion of these untested aspects has been added under Future Work, which in this manuscript also serves to outline study limitations.
Comment 9 - Cases where encrypted runs beat plaintext runs are attributed to “inefficient memory management” or small scale factor effects without concrete evidence.
Response 9 - We thank the reviewer for this comment. The explanation for the cases where encrypted runs were faster has been revised. The manuscript now clarifies that these results are most likely caused by cache effects or measurement variability at very small scale factors, rather than any real performance benefit from encryption.
Comment 10 - There are typographical and phrasing errors and some ambiguous claims, for example in the abstract and in related work summaries, that reduce readability. Perform a thorough language edit, fix typos in the abstract and introduction, and ensure every claim in the related work table maps to a verifiable source and a crisp takeaway. Include figure captions that specify environment details and run counts.
Response 10 - We thank the reviewer for this detailed feedback. A full language revision was completed; typographical and phrasing errors were corrected, figure captions were standardized to include environment and run details, and all claims in the related-work table were verified against their cited sources.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript provides a clear comparative evaluation of AES encryption across three major DBMS platforms using the TPC-H benchmark. Hardware and virtualization details are now fully transparent, which strengthens reproducibility. Repository availability of scripts and configurations enhances replicability. English language editing has improved readability and clarity.
Dear authors, thank you very much for your responses to my remarks/recommendations.
Review Report of the manuscript: Evaluation of the Impact of AES Encryption on Query Read Performance Across Oracle, MySQL, and SQL Server Databases
Comments 1, 2, 3, 4, 5, 7, 9, 11 are addressed.
Only Comments 6, 8 and 10 are not addressed.
Comment 6 (Results & Statistics): Partially addressed. Authors added descriptive statistics but did not perform formal significance testing. They acknowledged this limitation and deferred to future work.
Comment 8 (Anomalous Results Diagnostics): Deferred. Authors acknowledged importance but postponed due to scope/time constraints.
Comment 10 (Table Formatting & Visual Summaries): Deferred. Authors acknowledged but postponed to future work.
Summary: Most reviewer requirements were satisfied. However, statistical significance testing, deeper diagnostic analysis, and improved visual summaries were acknowledged but deferred to future work. These omissions do not invalidate the study but limit its analytical depth.
Areas for Improvement:
Statistical Analysis: While descriptive statistics are useful, the absence of formal significance testing weakens the robustness of conclusions. Future work should prioritize ANOVA or non-parametric tests to quantify differences.
Diagnostics: Query plans, buffer-pool statistics, and I/O latency analysis would provide valuable insight into anomalous results. Even a limited case study would strengthen the current paper.
Visual Summaries: Tables remain dense. Normalized bar charts or line plots would make trends more accessible to readers.
Scope Limitations: The newly added limitations subsection is welcome, but claims in the conclusions should remain cautious given the deferred analyses.
Overall Evaluation: The study is scientifically sound and contributes meaningfully to understanding the performance impact of AES encryption in relational databases.
The deferred elements (statistics, diagnostics, visuals) are not fatal flaws but represent opportunities for improvement in future work.
The manuscript is suitable for publication once the authors make small adjustments to clarify the deferred analyses (e.g., emphasize descriptive scope in the abstract/conclusion, and possibly add one simple visual summary figure). No major methodological changes are required at this stage.
The English has been substantially improved. The manuscript is now clear and professional, though occasional phrasing could be further tightened. Overall, it is acceptable for publication.
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have successfully addressed all my comments and concerns.
Reviewer 3 Report
Comments and Suggestions for Authors
The authors have satisfactorily modified their manuscript according to my previous criticisms. Therefore, I recommend the publication of this manuscript.
