by Sejin Han

Reviewer 1: Anonymous
Reviewer 2: Anonymous

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

‘the FBI traced billions in Bitcoin from Silk Road [12], research shows Bitcoin’ – poorly constructed.

‘Legal experts note "there is no such thing as a GDPR- or CCPA-compliant blockchain today’ – what legal experts specifically? And why this quote?

‘The proposed Artificial Intelligence-enhanced Regulatory Proof-of-Compliance (AIR-PoC) framework addresses this gap through a two-phase consensus mechanism that integrates AI legal agents with semantic web technologies for autonomous regulatory compliance enforcement’ – You should compare this framework with others, explaining how it differentiates from them.

‘current approaches involving centralized controls, reliance on immutable smart contracts, and adoption of relationship-centric compliance models reveal fundamental incompatibility with blockchain’s transaction-based and pseudonymous architecture’ – specify some relevant supporting sources for the current approaches.

‘These systems fundamentally rely on pre-established data subject-controller relationships, requiring explicit identification of responsible parties before transactions occur’, ‘These systems operate on post-execution compliance verification, creating fundamental resource waste’, etc. – what systems? That appears out of the blue. Check also for such other instances: ‘These approaches exemplify the fundamental inadequacy of relationship-based compliance models in blockchain environments’, etc.

‘Kassab and Ludwig [36] proposed OntoChain, a semantics-driven access control framework combining OWL ontologies with blockchain; their prototype was deployed on an Ethereum testnet to measure policy enforcement performance. Ferrucci et al. [37] introduced PolicyChain, which captures GDPR policy clauses in OWL ontologies and enforces them via Hyperledger Fabric chaincode with immutable audit logs. Conti et al. [38] developed Semantic Privacy-Aware Requirement Translation for Applications (SPARTA), a semantics-driven privacy framework that models Internet of Things (IoT) data trading policies in RDF/OWL and enforces them through Ethereum smart contracts. Rodríguez-Morales and Cortés-Rodríguez [39] presented BlockPOL, using a Platform for Privacy Preferences Project (P3P)-based ontology to represent cloud privacy policies and Hyperledger Fabric chaincode to automate access control decisions’ – include results for all the cited sources. The same also for: ‘Zafar (2025) analyzed blockchain-GDPR interactions through Worldcoin case studies, exploring Chameleon hashes and zero-knowledge proofs [40]’, ‘Sánchez-Obando et al. (2025) pioneered HErMIT reasoning engines for blockchain-AI semantic ontologies [41]’.

‘Proof-of-Stake validation’ – use only the acronym after the first full phrase instantiation.

‘Table ?? summarizes’ – what table?

Too many short paragraphs (up to three lines); this undermines idea consistency and correlation. Develop more on table analysis.

‘6. Discussion and Future Directions’ should incorporate comparisons with other research outcomes. Limitations and further research should be expanded.

As the topic is hot, I would have expected that most cited sources be from the past two years. Did you really check that older claims are still valid?

Author Response

Comment 1: 'the FBI traced billions in Bitcoin from Silk Road [12], research shows Bitcoin' – poorly constructed.

Response: Thank you for pointing out the poor construction of this sentence. I have revised the text to improve clarity and readability by separating the compound sentence into distinct, complete sentences with proper supporting evidence and appropriate connecting phrases.

  • Original text: "the FBI traced billions in Bitcoin from Silk Road [12], research shows Bitcoin addresses can retroactively deanonymize users, and criminals combine social media intelligence with blockchain analysis for targeting victims. Additionally, the T3 Financial Crime Unit, a joint initiative by Tron, Tether, and TRM Labs, announced that it had frozen more than $250 million of criminal assets less than a year after starting up [13]."
  • Revised text: "Notable cases illustrate these vulnerabilities. For instance, the FBI successfully traced billions of dollars in Bitcoin originating from Silk Road transactions [12]. Research further demonstrates that Bitcoin addresses can be retroactively linked to user identities. Moreover, criminals increasingly leverage social media intelligence combined with blockchain analytics to identify and target victims. Additionally, the T3 Financial Crime Unit represents a joint initiative by Tron, Tether, and TRM Labs. This unit subsequently announced that it had frozen more than $250 million of criminal assets less than a year after starting operations [13]."

Manuscript Revision: I have revised from compound sentence structure to clear sequential statements with enhanced readability, proper citation formatting, and logical connecting phrases between sentences (Introduction section, Lines 46-53).

 

Comment 2: 'Legal experts note "there is no such thing as a GDPR- or CCPA-compliant blockchain today' – what legal experts specifically? And why this quote?

Response: Thank you for this valuable comment. In the original version, I used the direct statement “there is no such thing as a GDPR- or CCPA-compliant blockchain today” according to EU Blockchain Observatory & Forum, “Blockchain and the GDPR” (2018). This report explicitly states: “Just like there is no GDPR-compliant Internet, or GDPR-compliant artificial intelligence algorithm, there is no such thing as a GDPR-compliant blockchain technology. There are only GDPR-compliant use cases and applications.”

However, I realized that such an absolute formulation—or a phrase that might be interpreted differently depending on the audience—may be misleading without proper references. To address this, I have revised the sentence to reflect the broader scholarly and legal consensus, emphasizing the fundamental incompatibilities between blockchain architectures and GDPR/CCPA requirements. I have also inserted authoritative references [arthurcox2023, edpb2025crossroads, haque2021gdpr, cmslaw2019, gomez2025gavin] to support this revised formulation. These cited authors and institutions are recognized legal experts and authorities in the field of data protection and blockchain regulation, which strengthens the reliability of the revised statement.

Manuscript Revision: In the revised manuscript, I have modified the sentence in the Introduction to read as follows (Introduction, lines 64–67):

“However, legal experts and researchers have identified fundamental incompatibilities between current blockchain architectures and GDPR/CCPA requirements [arthurcox2023, edpb2025crossroads, haque2021gdpr, cmslaw2019, gomez2025gavin], particularly regarding data immutability, decentralization, and the challenges in assigning controllership…”

Additionally, added five new references to References section.

Comment 3: 'The proposed Artificial Intelligence-enhanced Regulatory Proof-of-Compliance (AIR-PoC) framework …' – You should compare this framework with others, explaining how it differentiates from them.

Response: Thank you for this valuable feedback. I have added comparative statements to differentiate AIRPoC from existing frameworks. These include architectural differentiation contrasting reactive application-layer solutions with our proactive consensus-level approach, and technical distinction explaining how pre-consensus validation differs from conventional smart contract mechanisms.

Manuscript Revision: Added differentiation content in Introduction section contrasting reactive vs proactive approaches and consensus-level vs application-layer validation (Introduction section, Lines 81-86 and 90-92).

Comment 4: 'current approaches involving centralized controls, reliance on immutable smart contracts, and adoption of relationship-centric compliance models reveal fundamental incompatibility with blockchain's transaction-based and pseudonymous architecture' – specify some relevant supporting sources for the current approaches.

Response: Thank you for highlighting this need for supporting evidence. I have added citations supporting each category of approaches mentioned. These include centralized controls (Azaria et al., Zhang & Xue, Yao et al., Merlec et al., EDPB), immutable smart contracts (multiple sources spanning 2016-2024), and relationship-centric compliance models (comprehensive coverage of institutional and organizational approaches).

Manuscript Revision: Added supporting citations for current approaches in Section 2 Related Work (Lines 114-116).

Comment 5: 'These systems fundamentally rely on pre-established data subject-controller relationships, requiring explicit identification of responsible parties before transactions occur', 'These systems operate on post-execution compliance verification, creating fundamental resource waste', etc. – what systems? That appears out of the blue.

Response: Thank you for pointing out these ambiguous pronoun references. I have revised all ambiguous pronoun references throughout Section 2. "These systems" was replaced with explicit system identification to eliminate confusion about which specific systems are being analyzed.

Manuscript Revision: All subsubsection headers in Section 2 revised with explicit system identification: "Third-party regulatory platforms like HDG," "Smart contract-based enforcement systems," "Provenance tracking architectures" (Lines 125, 139, 161).

Comments 6, 11, and 12: Include results for all the cited sources. Comparative analysis for the Discussion section. Limitations and further research should be expanded. Check that older claims are still valid.

Response: Thank you for these comprehensive suggestions for improvement. I have thoroughly revised cited sources and made fundamental structural improvements.

  • Removed References: Kassab & Ludwig (OntoChain), Ferrucci et al. (PolicyChain), Conti et al. (SPARTA), Rodríguez-Morales (BlockPOL), Sánchez-Obando et al. - eliminated due to insufficient thematic alignment, absence of performance results, and access limitations preventing verification.
  • Newly Added References: Merlec et al. (2021) - smart contract-based dynamic consent management, Yao et al. (2021) - blockchain-based multi-agent EHR sharing, Tao et al. (2024) - knowledge graphs for BIM metadata compliance, Azgad-Tromer et al. (2023) - on-chain privacy and compliance.
  • Retained Reference: Zafar (2025) - blockchain and data protection law reconciliation research.

I restructured Section 2.4 into unified "Intelligent Compliance Enforcement Mechanisms" and enhanced Section 6 Discussion with comprehensive technical architecture comparison table, expanded limitations analysis, and specific future research directions.

Manuscript Revision: Section 2.4 reorganization with new references providing performance results (Lines 169-194). Section 6 expansion with systematic comparison across six intelligent compliance systems (Lines 527-561). Enhanced limitations analysis identifying five critical technical challenges (Lines 562, 576-603). Bibliography updates removing five problematic references and adding four recent publications (2021-2024).

Comment 7: 'Proof-of-Stake validation' – use only the acronym after the first full phrase instantiation.

Response: Thank you for this formatting correction. I have corrected terminology usage throughout the manuscript. The first occurrence establishes "Proof-of-Stake (PoS) validation" with consistent acronym usage in all subsequent instances.

Manuscript Revision: Terminology standardization establishing PoS acronym in Abstract with consistent usage throughout manuscript.

Comment 8: 'Table ?? summarizes' – what table?

Response: Thank you for catching this editorial oversight. The referenced table was originally intended to explain mathematical symbols but became redundant when symbol explanations were incorporated directly within definitions. This reference was an editorial oversight.

Manuscript Revision: Removed erroneous table reference; symbol explanations integrated directly within mathematical definitions.

Comment 9: Too many short paragraphs (up to three lines); this undermines idea consistency and correlation.

Response: Thank you for this important observation about paragraph structure. I systematically revised the text by consolidating fragmented short paragraphs into substantial, coherent paragraphs. I restructured all major sections by eliminating bullet-point formats, integrating subsection descriptions into flowing narrative text, and using transitional phrases to create natural progression between ideas.

Manuscript Revision: Comprehensive restructuring across all sections including Introduction (consolidated framework descriptions), Related Work (eliminated subsubsection headers), System Model (converted bullet-points to narrative), Proposed Method (integrated AI subsystems), Experimental Results (merged subsubsections), Discussion (unified comparative analysis), and Conclusion (eliminated bullet-point contributions) with enhanced conceptual flow throughout manuscript.

Comment 10: Develop more on table analysis.

Response: I thank the reviewer for this insightful suggestion to enhance our table analysis. This feedback has prompted me to significantly expand our analytical depth and provide more comprehensive interpretation of the experimental results. Additionally, following the statistical enhancement requested by another reviewer regarding standard deviations and variance indicators, I conducted 30 repetitions for each experimental scenario to establish statistical reliability, which resulted in updated numerical values throughout all tables while maintaining the same overall performance patterns and conclusions. The enhanced table analysis now provides deeper insights into the stability characteristics, performance scaling relationships, and compliance accuracy patterns that were previously under-interpreted.

Manuscript Revision:

Section 5.2 (Performance Evaluation and Results) - Lines 429-468:

  • Line 448-455: Added comprehensive stability analysis paragraph explaining exceptional consistency patterns, overhead ranges (4.8% to 4.6%), and superior AIRPoC stability characteristics (6.7% average CV vs 7.1%)
  • Line 437-447: Expanded Table 1 analysis with detailed stability metric interpretation, comparative performance discussion, and efficiency of proactive compliance filtering approach
  • Line 447: Enhanced Table 2 analysis with comprehensive statistical metrics explanation including standard deviations, confidence intervals, and stable efficiency scaling characteristics
  • Line 456-463: Added detailed Table 3 interpretation focusing on compliance accuracy breakdown by scenario type (90.4% GDPR compliant, 100% violation detection) and performance efficiency
  • Line 464-468: Integrated Table 4 analysis explaining concurrent load performance patterns, scalability benefits under high user concurrency, and 2.6% faster processing at maximum load
  • Throughout Section 5.2: Added cross-table comparative analysis linking stability metrics, overhead patterns, and performance scaling relationships
  • Section 5.2: Introduced quantitative trend analysis explaining performance characteristics across different experimental conditions and transaction volumes

Reviewer 2 Report

Comments and Suggestions for Authors
  • This paper introduces AIRPoC, a two-phase blockchain consensus framework that integrates AI-driven legal agents and semantic web technologies to enforce GDPR and AML compliance proactively, achieving 88.5% accuracy with minimal 5.2% performance overhead, thereby addressing blockchain’s regulatory challenges while preserving decentralization, efficiency, and scalability in digital economies.
  • In general, the paper is well-written and easy to follow. Also, the paper makes a great contribution in the field and solves an important problem. 
  • In the abstract, specify that the experiment you did is simulated rather than a real-world experiment.
  • Make sure you provide proper references in the introduction where you claim " there is no such thing as a GDPR- or CCPA-compliant blockchain today"
  • line 65, provide a reference for the current research stated here.
  • Section 2.6: Provide the referred references where you mention there are some gaps.
  • In your model, can you explain the relationship between the validator and the block integration in phase 2? (Referring to Figure 1 here)
  • In the experiment, how can you validate the AI-based metadata extraction algorithm? 
  • Since some components of your architecture are using LLM, can this lead to more hidden challenges like data, interpretability, reproducibility, and bias? Please discuss this issue.
  • In the experiment section: some tables lack standard deviations; include variance to show stability.

Author Response

Comment 1: In the abstract, specify that the experiment you did is simulated rather than real-world experiment.

Response: I have revised the abstract to clearly specify that experiments were conducted in a simulation environment rather than real-world settings.

Manuscript Revision: Changed "Experimental evaluation" to "Simulation-based experimental evaluation" and added "in a controlled environment" (lines 18-19).

 

Comment 2: Make sure you provide proper references in the introduction where you claim "there is no such thing as a GDPR- or CCPA-compliant blockchain today"

Response:
Thank you for this valuable comment. In the original version, I used the direct statement “there is no such thing as a GDPR- or CCPA-compliant blockchain today” according to EU Blockchain Observatory & Forum, “Blockchain and the GDPR” (2018). This report explicitly states: “Just like there is no GDPR-compliant Internet, or GDPR-compliant artificial intelligence algorithm, there is no such thing as a GDPR-compliant blockchain technology. There are only GDPR-compliant use cases and applications.”

However, I realized that such an absolute formulation—or a phrase that might be interpreted differently depending on the audience—may be misleading without proper references. To address this, I have revised the sentence to reflect the broader scholarly and legal consensus, emphasizing the fundamental incompatibilities between blockchain architectures and GDPR/CCPA requirements. I have also inserted authoritative references [arthurcox2023, cmslaw2019, edpb2025crossroads, gomez2025gavin, haque2021gdpr]  to support this revised formulation. These cited authors and institutions are recognized legal experts and authorities in the field of data protection and blockchain regulation, which strengthens the reliability of the revised statement.

Manuscript Revision:

Original:
There is no such thing as a GDPR- or CCPA-compliant blockchain today.

Revised:
However, legal experts and researchers have identified fundamental incompatibilities between current blockchain architectures and GDPR/CCPA requirements [arthurcox2023, cmslaw2019, edpb2025crossroads, gomez2025gavin, haque2021gdpr], particularly with respect to data immutability, decentralization, and the difficulties in clearly assigning controllership (Introduction, lines 64–67).

 

Comment 3: line 65, provide a reference for the current research stated here.

Response: I have added appropriate references supporting the statement about current research limitations, covering both reactive auditing approaches (Chen et al. 2021, Zafar 2025, Christidis & Devetsikiotis 2016) and basic encryption/access control systems (Zhang et al. 2019, Azaria et al. 2016).

Manuscript Revision: Added supporting citations for reactive auditing and basic compliance approaches (lines 69-71).

 

Comment 4: Section 2.6: Provide the referred references where you mention there are some gaps.

Response: I have added specific references to support each identified gap: transaction-level legal responsibility (EDPB 2025, Zafar 2025), reactive auditing paradigm (Chen et al. 2021, Christidis & Devetsikiotis 2016, Zafar 2025), and fragmented compliance approaches (Azaria et al. 2016, Zhang et al. 2018, Casino et al. 2019).

Manuscript Revision: Added citations to substantiate all three research gaps (Section related work/Research Gaps and Limitations, lines 200-206).

 

Comment 5: In your model, can you explain the relationship between the validator and the block integration in phase 2? (Referring to Figure 1 here)

Response: I have clarified the Phase 2 process sequence: block proposal by designated proposers, validator attestation for consensus-level verification, and block integration component aggregating attestations across slots until finality threshold is reached.

Manuscript Revision: Added detailed explanation of the epoch-slot structure, attestation aggregation process, and finality algorithm integration in Phase II section (Section Proposed Method, lines 391-403).
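As a purely illustrative sketch of the sequence described in this response (block proposal, validator attestation, and aggregation until finality), the flow might look as follows. The 2/3 stake threshold, the data shapes, and all names here are assumptions for illustration, not details taken from the manuscript:

```python
# Hypothetical sketch of the Phase 2 flow: a proposed block accumulates
# validator attestations across slots until the aggregated attesting
# stake passes a finality threshold (2/3 here, an assumed supermajority).
from dataclasses import dataclass

FINALITY_THRESHOLD = 2 / 3  # assumed, as in typical PoS finality designs

@dataclass
class ProposedBlock:
    slot: int
    attesting_stake: float = 0.0

def aggregate_attestations(block, attestations, total_stake):
    """Accumulate validator stake; return whether finality is reached."""
    for validator_stake in attestations:
        block.attesting_stake += validator_stake
    return block.attesting_stake / total_stake >= FINALITY_THRESHOLD

block = ProposedBlock(slot=1)
# Three validators attest with 40, 20, and 10 units of stake out of 100.
finalized = aggregate_attestations(block, [40, 20, 10], total_stake=100)
print(finalized)  # 70/100 exceeds 2/3, so the block reaches finality
```

The point of the sketch is only the relationship the reviewer asked about: validators produce attestations, and the block integration component is the aggregation step that converts accumulated attestations into a finality decision.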

 

Comment 6: In the experiment, how can you validate the AI-based metadata extraction algorithm?

Response: Thank you for this important question regarding the validation of our AI-based metadata extraction algorithm. I validated the end-to-end system performance through comprehensive testing across 25,000 transactions with pre-classified ground truth scenarios, achieving 88.5% overall compliance accuracy as detailed in the experimental results section. However, I acknowledge that this approach does not isolate the metadata extraction component specifically. The current validation measures the combined performance of metadata extraction, query generation, and compliance determination rather than validating the metadata extraction algorithm independently.

Manuscript Revision: Added explicit limitation in Section 5.3 acknowledging that proper metadata extraction validation would require field-level ground truth annotation, component-specific accuracy metrics, and precision/recall calculations for individual metadata types. (section Experimental Results / Experimental Limitations and Constraints, lines 477-491)

 

Comment 7: Since some components of your architecture are using LLM, can this lead to more hidden challenges like data, interpretability, reproducibility, and bias? Please discuss this issue.

Response: Thank you for raising this critical concern about the potential risks associated with LLM integration in our architecture. I have added comprehensive discussion addressing these challenges. For data privacy, the current implementation includes three-layer anonymization (cryptographic hashing, GDPR categorization, location generalization) ensuring LLMs process only abstract categories rather than personal records. Regarding interpretability, while mechanisms such as decision logging with SPARQL queries, RDF-formatted audit trails, and human review protocols for low-confidence decisions are feasible within the current system architecture, they were not considered in this implementation. Similarly, advanced reproducibility protocols beyond basic deterministic prompts and comprehensive bias mitigation strategies remain as future implementation requirements. Additionally, significant challenges including LLM hallucinations, model drift, and computational overhead represent ongoing limitations that require future exploration of formal verification methods and hybrid architectures.

Manuscript Revision: Added detailed discussion of LLM challenges and mitigation strategies in Section 6 (section discussion and future direction, Lines 595-606, 632-637).
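To make the three-layer anonymization in this response concrete, a minimal sketch follows. The field names, the category labels, and the country-level generalization rule are hypothetical choices for illustration; only the three layers (cryptographic hashing, GDPR categorization, location generalization) come from the response itself:

```python
# Minimal sketch of three-layer anonymization so that a downstream LLM
# sees only abstract categories, never raw personal records.
# All field names and labels here are illustrative assumptions.
import hashlib

def anonymize(record):
    return {
        # Layer 1: cryptographic hashing of the subject identifier
        "subject": hashlib.sha256(record["subject_id"].encode()).hexdigest()[:16],
        # Layer 2: GDPR categorization replaces the raw data value
        "category": "special_category" if record["data_type"] == "health" else "personal",
        # Layer 3: location generalized to country level only
        "location": record["location"].split("/")[0],
    }

out = anonymize({"subject_id": "alice@example.com",
                 "data_type": "health",
                 "location": "DE/Berlin/10115"})
print(out["category"], out["location"])  # special_category DE
```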

 

Comment 8: In the experiment section, some tables lack standard deviations; include variance to show stability.

Response: I sincerely appreciate the reviewer's constructive feedback regarding the statistical rigor of our experimental tables. I recognized that my original methodology collected single-point measurements without proper variance indicators, which was insufficient for rigorous experimental validation. I redesigned the experiment protocol to include 30 repetitions for each of the 24 test scenarios, resulting in 720 total experimental runs. The repeated experiments revealed excellent measurement stability (CV < 8% across all scenarios), with processing overhead improving from the original 5.2% to a statistically validated mean of 4.5%±0.33. I incorporated comprehensive statistical frameworks including standard deviations, coefficient of variation analysis, 95% confidence intervals, and stability assessments throughout all performance tables.
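The statistical summary described in this response (mean ± SD, coefficient of variation, 95% confidence interval over 30 repetitions) can be sketched in a few lines. The sample values below are illustrative placeholders, not the paper's measurements, and the interval uses a normal approximation (z = 1.96) as an assumption:

```python
# Sketch of the per-scenario summary statistics added to the tables:
# mean, sample standard deviation, CV, and a 95% CI (normal approx.).
# The run values are illustrative, not the manuscript's data.
import math
import statistics

def summarize(runs):
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)           # sample standard deviation
    cv = sd / mean * 100                  # coefficient of variation, %
    half = 1.96 * sd / math.sqrt(len(runs))
    return mean, sd, cv, (mean - half, mean + half)

runs = [5.1, 5.4, 5.2, 5.6, 5.3, 5.5] * 5   # 30 repetitions of one scenario
mean, sd, cv, ci = summarize(runs)
print(f"{mean:.2f}±{sd:.2f} (CV {cv:.1f}%)")
```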

Manuscript Revision:

Section 5.2 (Performance Evaluation and Results) - Lines 429-468:

  • Line 430-432: Added "with 30 repetitions per test scenario to establish statistical reliability. Standard deviations include variance to show stability" to methodology description
  • Table 1: Updated red-highlighted numerical values to mean±SD format (e.g., "5.40±0.33") and added "Stability (CV)" column with CV percentages
  • Line 437-442: Updated text to reference "4.5% processing time overhead" and added comprehensive stability analysis describing CV percentages
  • Table 2: Updated red-highlighted numerical values including Standard Deviation, CV(%), and 95% Confidence Intervals columns
  • Line 448-455: Added stability analysis paragraph explaining overhead consistency across transaction volumes
  • Table 3: Updated red-highlighted accuracy percentages and performance metrics
  • Table 4: Updated red-highlighted performance metrics (Avg Time, P95 Time, TPS columns)

Section 5.3 (Experimental Limitations) - Lines 469-509:

  • Line 477-491: Added detailed explanation of end-to-end vs component-level validation methodology and metadata extraction limitations
  • Line 492-499: Enhanced description specifying "stability analysis across 30 repetitions demonstrates excellent variability control (CV < 8% for all scenarios)" and scalability concerns
  • Line 502-505: Added statistical validation context referencing "superior consistency (average CV: 6.7%) compared to Basic PoS (average CV: 7.1%)"

Section 6 (Discussion) - Lines 510-564:

  • Line 528-529: Updated performance metrics to reference "88.9% compliance detection" and "4.5% overhead"
  • Line 558-559: Updated performance comparison to specify "4.5% overhead compared to standard PoS"

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

There are several instances of long source strings that make unclear the specific contribution.

Author Response

Comment: There are several instances of long source strings that make unclear the specific contribution.

Response: I sincerely thank the reviewer for this critical observation. Based on this comment, I identified three types of problematic instances where my specific contributions were obscured:

  • Type 1: Prior work and our contributions narratively intertwined - Long prose paragraphs mixing prior work limitations with our novel solutions, compounded by heavy citation clustering that reduces readability and makes it unclear where existing work ends and our contribution begins.
  • Type 2: Implementation details obscuring conceptual contributions - Technical descriptions that present system operations in implementation-oriented "source string" format, reading like technical documentation rather than scholarly analysis of novel contributions.
  • Type 3: Standard operations and novel techniques visually indistinguishable - Code listings and explanations that present both conventional operations and our innovations in identical format without visual or narrative separation.

Manuscript revision:

Type 1 Example: Related Work (Section 2, Lines 113-119)

  • Original problem: 180-word "long source string" with excessive citation clustering (three citation groups totaling 18+ references) that obscures the narrative distinction between prior work limitations and our contribution.
  • Original text: “While blockchain-privacy regulation integration research is actively progressing, current approaches involving centralized controls[18,26,29,42,44], reliance on immutable smart contracts[22,27–30,41–43], and adoption of relationship-centric compliance models[18,26,29,31,35,39,42–44] reveal fundamental incompatibility with blockchain’s transaction-based and pseudonymous architecture, necessitating a paradigmatic shift to dynamic, proactive transaction-level enforcement. This section analyzes these limitations and advocates for transaction-centric compliance mechanisms.”
  • Revision: Categorized prior work into three groups (centralized controls (e.g., [18,26]); smart contracts (e.g., [27,28]); relationship-centric models (e.g., [35,39])) + explicitly stated "the research gap that our approach"
  • Revised text: “Though research on blockchain privacy regulation is actively progressing, current approaches reveal a fundamental incompatibility with blockchain’s architecture. The incompatibility of prior approaches falls into three categories. First, centralized intermediaries conflict with decentralization (e.g., [18,26]). Second, smart contract rule checking is reactive, wastes resources, and cannot promptly reflect regulatory updates (e.g., [27,28]). Third, relationship-centric models struggle to assign responsibility under pseudonymity (e.g., [35,39]). This section analyzes these limitations in detail and identifies the research gap that our approach addresses.”

Type 2 Examples:

Example 2a: Listing 1 Explanation (Lines 316-317)

  • Original problem: A 120-word "long source string" presenting the process as a flat enumeration of steps without distinguishing standard ontology population from our novel automated reasoning contribution, making the "specific contribution" unclear.
  • Original text: “The process continues with integration of jurisdictional adequacy decisions and technical safeguard requirements into $I$ (e.g., adding specific regulatory data instances such as "US has no GDPR adequacy decision", "health data requires AES-256 encryption"), and definition and implementation of axioms and inference rules ($A$) for automated legal reasoning (e.g., defining inference rules such as "special category data + US transfer → additional protective measures required").”
  • Revision: Broke down the "long source string" into structured sentences that clearly articulate our "specific contribution": the automated derivation of compliance requirements through domain-independent axioms, rather than manual encoding of legal scenarios.
  • Revised text: "Listing 1 shows the RDF triple representation resulting from converting GDPR Article 9 into the O_GDPR ontology structure. This process continues with two key steps. First, we integrate jurisdictional adequacy decisions and technical safeguard requirements into the knowledge base I. Second, unlike existing approaches that require manual rule creation for each legal scenario, we define and implement domain-independent axioms and inference rules (A) that automatically derive compliance requirements from the knowledge base. For example, when our system encounters special category data being transferred to the US, it automatically infers that 'additional protective measures are required' by combining the axioms about data sensitivity, jurisdictional adequacy status, and transfer requirements—without requiring explicit encoding of this specific scenario."
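The scenario-independent inference described in the revised text can be hedged into a toy forward-chaining step. The predicate names, the triple encoding, and the single rule below are illustrative assumptions standing in for the manuscript's OWL axioms; the point is only that the US/health conclusion is derived from general facts rather than hard-coded:

```python
# Hypothetical sketch of domain-independent inference: one generic rule
# derives "additional protective measures required" from facts about
# data sensitivity and jurisdictional adequacy, without encoding the
# US/health scenario explicitly. Predicate names are ours, not OGDPR's.
facts = {
    ("US", "hasAdequacyDecision", False),
    ("tx1", "containsDataOf", "special_category"),
    ("tx1", "transfersTo", "US"),
}

def infer(facts):
    """Special-category data sent to a non-adequate country implies a
    requirement for additional protective measures."""
    derived = set()
    for (tx, pred, cat) in facts:
        if pred != "containsDataOf" or cat != "special_category":
            continue
        for (tx2, pred2, dest) in facts:
            if tx2 == tx and pred2 == "transfersTo" and \
               (dest, "hasAdequacyDecision", False) in facts:
                derived.add((tx, "requires", "additional_protective_measures"))
    return derived

print(infer(facts))
```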

Example 2b: Listing 2 (Lines 337-343)

  • Original problem: 7 RDF triples presented in uniform format with no visual distinction between standard blockchain extraction and our AI inference contributions.
  • Revision: Separated with inline comments (Lines 1-4: "Standard extraction from blockchain"; Lines 5-7: "Our AI-powered inference (Novel contribution)") + used aiInference: namespace prefix to mark our ontology extensions.

Example 2c: Listing 2 Explanation (Lines 344-355)

  • Original problem: A 120-word "long source string" that mixes standard metadata extraction techniques with our novel AI inference contribution, making the "specific contribution" unclear.
  • Original text: “This example demonstrates that AI-based metadata extraction is implemented through explicit information extraction (extraction of directly verifiable information from transaction data such as health data inclusion, US destination, lack of encryption, use of Standard Contractual Clauses), contextual inference where the AI inference engine performs comprehensive analysis of individual elements to derive contextual information, intent classification involving inference of processing purpose based on transaction patterns and data types (e.g., classification as medical research purpose), and risk assessment through comprehensive evaluation of complex risk factors to determine compliance risk level (e.g., high risk assessment due to unencrypted international transfer of special category data). This AI inference process provides intelligent analytical capabilities that go beyond simple rule-based matching to interpret transaction meanings in regulatory contexts and proactively assess compliance risks.”
  • Revision: Transformed the implementation-oriented enumeration into scholarly prose by (1) replacing the "source string" format with a conceptually-focused explanation that highlights how our approach differs from existing methods, and (2) breaking down the complex sentence into shorter statements that clearly articulate our "specific contribution": a context-aware inference framework that interprets regulatory meaning beyond pattern matching.
  • Revised text: “Listing 2 separates two distinct layers. Lines 1–4 extract verifiable properties using standard parsing with O_GDPR vocabulary. These properties include data type, destination, encryption, and transfer mechanism. Lines 5–7 show our novel contribution: AI inference derives the processing purpose via classification, the risk level via our assessment algorithm, and the legal basis via our reasoning engine. The aiInference: namespace marks our ontology extensions for AI-generated metadata. This dual-layer approach combines deterministic extraction with probabilistic inference. It enables transaction-level verification without predefined relationships.”
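The dual-layer design described in the revised text can be illustrated with a short, hedged Python sketch (the function names, the transaction fields, and the toy risk rule are assumptions for illustration; they stand in for the paper's actual classifier and assessment algorithm):

```python
def extract_verifiable(tx):
    # Layer 1 (analogue of Listing 2, lines 1-4): deterministic
    # extraction of properties read directly from the transaction.
    return {
        "dataType": tx["data_type"],
        "destination": tx["destination"],
        "encrypted": tx["encrypted"],
        "transferMechanism": tx["mechanism"],
    }

def infer_metadata(props):
    # Layer 2 (analogue of lines 5-7): inferred metadata, marked with
    # the aiInference: prefix. A toy rule stands in for the AI models.
    purpose = "MedicalResearch" if props["dataType"] == "HealthData" else "Unknown"
    high_risk = (props["dataType"] == "HealthData"
                 and props["destination"] == "US"
                 and not props["encrypted"])
    return {
        "aiInference:processingPurpose": purpose,
        "aiInference:riskLevel": "High" if high_risk else "Normal",
    }

tx = {"data_type": "HealthData", "destination": "US",
      "encrypted": False, "mechanism": "StandardContractualClauses"}
metadata = {**extract_verifiable(tx), **infer_metadata(extract_verifiable(tx))}
print(metadata["aiInference:riskLevel"])
```

Keeping the two layers as separate functions mirrors the namespace separation in the listing: consumers can always tell which properties are verifiable and which are model-derived.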

Type 3 Example: Query Reason Subsystem (Lines 357-379)

  • Original problem: 300-word "long source string" with nested clauses where general SPARQL query execution and our dynamic generation technique are described without distinction, obscuring the "specific contribution."
  • Revision: Added upfront contrastive statement ("Unlike rule-based systems... our AI dynamically generates") + converted to numbered list of three novel capabilities (dynamic query construction; semantic matching; integrated reasoning) + separated Transaction001 example into sequential steps clearly showing our system's operations.
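The first of the three capabilities above, dynamic query construction, can be sketched as follows. This is a hypothetical illustration, not the paper's subsystem: the `build_compliance_query` name, the `gdpr:` prefix URI, and the ASK-query template are assumptions, and the actual generated SPARQL may differ structurally:

```python
def build_compliance_query(tx_id, data_type, destination):
    # Instead of a fixed, hand-written query per legal scenario, the
    # query is assembled from the metadata inferred for this transaction.
    return f"""
PREFIX gdpr: <http://example.org/ogdpr#>
ASK {{
  gdpr:{data_type} a gdpr:SpecialCategoryData .
  gdpr:{destination} gdpr:hasAdequacyDecision false .
  gdpr:{tx_id} gdpr:requires gdpr:AdditionalProtectiveMeasures .
}}""".strip()

# Sequential-step analogue of the Transaction001 walkthrough:
query = build_compliance_query("Transaction001", "HealthData", "US")
print(query)
```

Because the bindings come from the metadata extraction stage rather than from a query catalog, a new transaction type needs no new hand-written query.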

Other revision: English Language Improvement (Total Corrections: 442)

I have engaged MDPI's Author Services for professional English editing. Based on their feedback, I have implemented 442 minor corrections throughout the manuscript, including grammar, punctuation, and stylistic improvements. The following shows the types and number of revisions by section:

Abstract (12): Grammar & verb forms (4), Word choice (3), Articles (2), Punctuation (2), Relative clauses (1)

Introduction (70): Punctuation (18), Word choice & prepositions (12), Sentence structure (9), Articles (8), Grammar & verb forms (7), Currency notation (6), Redundancy elimination (5), Capitalization (3)

Related Work (82): Capitalization consistency (16), Sentence structure (14), Punctuation (13), Word choice & prepositions (12), Articles (11), Grammar & verb forms (8), Hyphenation (6), Other (2)

System Model (54): Capitalization consistency (18), Articles (14), Sentence structure (8), Punctuation (7), Grammar & verb forms (5), Redundancy elimination (2)

Proposed Method (71): Punctuation (19), Grammar & verb forms (15), Articles (12), Word choice & prepositions (9), Sentence structure (8), Capitalization (5), Number style (2), Other (1)

Experimental Results (89): Capitalization consistency (21), Articles (18), Punctuation (15), Sentence structure (12), Grammar & verb forms (9), Number style (6), Word choice & prepositions (5), Hyphenation (3)

Discussion (42): Sentence structure/conciseness (12), Punctuation (10), Articles (8), Capitalization (5), Word choice & prepositions (4), Hyphenation (2), Redundancy elimination (1)

Conclusions (22): Articles (8), Grammar & verb forms (5), Word choice & prepositions (4), Punctuation (2), Sentence structure (2), Hyphenation (1)

Other Sections (10): Articles (6), Sentence structure (2), Capitalization (2)

Figures and Tables: All figures (2) regenerated at 300 DPI with enhanced readability (color-coding, larger fonts, repositioned legends). Tables (5) reformatted with consistent alignment, clear headers, consistent capitalization, and improved spacing.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for addressing my comments. The paper is in a better shape now. 

Author Response

English Language Improvement (Total Corrections: 442)

I have engaged MDPI's Author Services for professional English editing. Based on their feedback, I have implemented 442 minor corrections throughout the manuscript, including grammar, punctuation, and stylistic improvements; the per-section breakdown, along with the figure and table revisions, is identical to that given in the response to Reviewer 1 above.