Article

Bidirectional Privacy Preservation in Web Services

1 School of Electrical and Computer Engineering (EECS), University of Ottawa, Ottawa, ON K1N 6N5, Canada
2 School of Engineering Design and Teaching Innovation (and EECS), University of Ottawa, Ottawa, ON K1N 6N5, Canada
* Authors to whom correspondence should be addressed.
Computers 2025, 14(11), 484; https://doi.org/10.3390/computers14110484
Submission received: 1 October 2025 / Revised: 1 November 2025 / Accepted: 3 November 2025 / Published: 6 November 2025
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)

Abstract

In web-based services, users are often required to submit personal data, which may be shared with third parties. Although privacy regulations mandate the disclosure of intended recipients in privacy policies, this does not fully alleviate users’ privacy concerns. The presence of a privacy policy does not ensure compliance, since users must assess the trustworthiness of all parties involved in data sharing. On the other hand, service providers want to minimize the costs associated with preserving user privacy. Indeed, service providers may have their own privacy preservation requirements, such as hiding the identities of third-party suppliers. We present a novel framework designed to tackle the dual challenges of bidirectional privacy preservation and cost-effectiveness. Our framework safeguards the privacy of service users, providers, and various layers of intermediaries in data-sharing environments, while also reducing the costs incurred by service providers related to data privacy. This combination makes our solution a practical choice for web services. We have implemented our solution and conducted a performance analysis to demonstrate its viability. Additionally, we prove its privacy and security within a Universal Composability (UC) framework.

1. Introduction

Web-service providers often need to process various types of personal data from service users to deliver their services effectively. This processing is essential for purposes such as verifying access, personalizing experiences, facilitating advertising, and conducting research. The primary service provider may share this personal data with second-layer service providers, who can pass it to third-layer providers and onward, continuing in a chained fashion. As a result, personal data supplied by the service user can cascade through multiple layers, creating a tree-like structure that is challenging to trace.
To safeguard the privacy of service users, proposed regulations require service providers to disclose information about the next layer of individuals or companies with whom they intend to share collected private data [1]. In theory, this transparency allows service users to recursively examine the privacy policies of all parties involved, enabling them to make informed decisions.
For instance, using a travel management company requires travelers to provide personal information, such as the duration of their stay, their age, any medical conditions, passport numbers, email addresses, room preferences, and dietary requirements. The travel management company is responsible for the overall management of the trip, including transportation, accommodation, meals, and travel insurance. This is often carried out in partnership with entities like transportation providers, hotels, and insurance companies that serve as secondary service providers.
Furthermore, local food companies that supply meals to hotels might act as third-layer service providers, necessitating knowledge of dietary preferences. In this context, when planning a trip, travelers should begin by reviewing the privacy policy of the travel management company to identify the involved hotel, travel, and insurance providers. Next, the privacy policies of these second-layer service providers would need to be examined to uncover any potential third-layer service providers (in this example, the food company). The traveler must then review the privacy policy of each identified party and assess its trustworthiness.
However, in practice, assessing the privacy implications through this recursive process is both complex and time-consuming for a service user. Additionally, assessing a service provider’s trustworthiness in relation to the alignment between declared privacy policies and actual practices poses a challenge. Furthermore, privacy policies may be updated after personal data has been collected, allowing service providers to operate on previously acquired data according to these new policies without detection.
In the other direction, service providers incur significant expenses related to infrastructure, software migration, training, and, importantly, the ongoing audits [2] necessary to ensure regulatory compliance and avoid hefty fines. Smaller organizations often struggle to navigate these complexities [3] and to absorb the expertise and financial strain they impose [4,5].
This reality inadvertently gives larger enterprises a competitive edge in the market [6]. Once these advantages are secured, larger companies typically pass on these additional costs to consumers, resulting in higher product prices [7]. Even more concerning is that, despite these indirect costs, there is no guarantee of user privacy.
This is because passing an audit does not necessarily ensure ongoing enforcement of privacy policies. There have been instances where reputable organizations have still breached their own privacy policies, even after successfully completing an audit [8]. Therefore, we argue that the existing privacy practices do not fully address the concerns that arise in real-world situations.
Furthermore, service providers can also have privacy requirements. They have valid reasons for being hesitant to disclose the details or even the existence of the third parties who act as their suppliers or partners. Such information may be considered strategic and may even be classified as a trade secret [9,10]. Service providers may also wish to protect their clients’ identities from their suppliers while forwarding the data. In our example, the travel company might want to conceal the client’s identity from the insurance provider to prevent the latter from reaching out directly to the traveler with subsidized offers.
In this research, we aim to address the practical privacy concerns faced by both service users and providers in a data-sharing environment. Our goal is to empower service users by providing enhanced control and simplicity in their data-sharing choices. We are committed to ensuring that service providers consistently uphold their declared privacy policies. At the same time, we are dedicated to preventing any unfair advantages from being granted to service users with malicious intentions. Furthermore, we ensure the protection of a service provider’s trade secrets by not mandating the disclosure of delegatee information.
Typically, the focus in designing privacy-enhancing technologies (PETs) has been on safeguarding service users from various privacy issues, with little consideration given to the challenges faced by service providers. Ironically, these providers often incur added costs and complexities when they implement PETs for the benefit of users. This situation raises an important practical question: Why should a service provider choose to adopt newly proposed PETs? Interestingly, this question has not been sufficiently explored in the existing literature on PETs.
The novelty of our framework is its focus on the costs and motivational factors affecting service providers. Our framework ensures key privacy requirements for service users while simultaneously lowering expenses and complexities for service providers. Additionally, for service providers, this framework introduces essential privacy features that are currently unattainable. We hope that this combination of bidirectional privacy preservation (Appendix A has more example scenarios where we need bidirectional privacy) and cost savings will motivate service providers to adopt our proposed framework in real-world applications.
The principal contributions of this paper are as follows:
  • Creation of a framework for Bidirectional Privacy-Preservation for Multi-Layer Data Sharing in Web Services (BPPM), which:
    (a) provides essential privacy guarantees to the service users;
    (b) reduces the cost associated with privacy preservation for service providers;
    (c) presents opportunities for service providers to achieve privacy benefits that were previously unattainable.
  • We prove the security and privacy properties of BPPM within a UC framework.
  • We implemented the framework and measured and compared its different performance metrics to demonstrate its practicality.
This paper is organized into several sections. An overview of BPPM is presented in Section 2. Section 3 deals with the details of our protocol design. In Section 4, the privacy and security aspects of BPPM are proved within a formal UC framework. Implementation details and related performance analysis are described in Section 5. In Section 6, the costs and benefits of BPPM are analyzed. Related works are reviewed in Section 7. Finally, Section 8 concludes this paper.

2. BPPM Overview

The service user (or the data owner) obtains services from the primary service provider, who relies on secondary sub-service providers. This process can continue recursively, creating a tree-like hierarchical structure of service providers (or the data users).
BPPM effectively obscures the visibility of the entire tree structure from any individual party involved. Each service provider is only aware of its immediate parent and child providers. Likewise, the primary service provider is the sole entity that knows the identity of the service user. This concealment feature allows service providers at any level to safeguard their trade secrets. In this paper, the terms service provider and data user are used interchangeably as are service user and data owner.
The $j$th data user at the $i$th layer is denoted as $DU_{i,j}$. To deliver any requested sub-services to its parent node, $DU_{i,j}$ may require various individual data items (such as passport number or date of birth) from an upstream party, which will be processed in specific ways. $DU_{i,j}$ specifies these data-item requirements and the associated processing details in a document known as the Personal Data processing Statement ($PDS_{i,j}$), which is included in its privacy policy.
Because of the trade secret protection feature, the data owner ($DO$) is unaware of any other data users besides the primary data user, $DU_{1,1}$. However, BPPM ensures that all data processing statements ($PDS_{i,j}$, $\forall i,j$) remain accessible to the data owner. This transparency allows the data owner to assess the privacy implications of sharing their personal data and to make a well-informed decision.
To achieve this, each data user collects $PDS$ structures from the privacy policy documents of their child nodes. They then append their own $PDS$ and send the consolidated information to their parent. In a recursive manner, the primary data user compiles the details for the entire tree in $PDS_{1,1}$ and places that $PDS_{1,1}$ in its own privacy policy document.
The data owner reviews this $PDS_{1,1}$ and provides consent for some or all of the data processing statements (i.e., $PDS_{DO} \subseteq PDS_{1,1}$). Subsequently, the data owner gathers the required personal data items in the structure $DS_{DO}$. Both $PDS_{DO}$ and $DS_{DO}$ are transmitted to $DU_{1,1}$.
To ensure both authenticity and confidentiality, the data owner signs and then encrypts both $PDS_{DO}$ and $DS_{DO}$ before their transmission. Upon receipt, the primary data user processes $DS_{DO}$ locally. When necessary, relevant subsets can be passed on to child nodes. This forwarding can occur recursively, allowing personal data from the data owner to traverse the data usage tree.
BPPM facilitates seamless data sharing while simultaneously reinforcing data owners’ control over their personal information. Importantly, our protocol guarantees that personal data are never exposed in plaintext, even during computation, so raw personal data cannot leak. Moreover, our protocol also ensures that only the data owner-approved processing can be performed on the data and nothing else.
In BPPM, we accomplish this by processing the data and consent pair within a trusted execution environment (TEE) located at each data user’s site. A TEE provides two key security assurances. First, a TEE ensures that only a known program is executed. Second, it provides a secure sandbox environment, an enclave, which maintains the confidentiality of the program’s data and state during computation (for further details, see Appendix B).
Executing code naively within a TEE environment is not a foolproof solution. There can be a mismatch—whether intentional or not—between what a data processing statement claims and the actual actions carried out by the executing code. To remove this risk, BPPM limits execution to only those code components that have received explicit approval from a privacy auditor.
We use some notations and acronyms in our paper, which are summarized in Table 1.

2.1. BPPM Workflow

Each data user, $DU_{i,j}$, begins by setting up an enclave (Stage 1 in Figure 1) with a predefined secure base code ($Prog_{Eclv}$) on a TEE-enabled platform. Next, the data user loads pre-audited, privacy-leakage-free additional code components, corresponding to the personal data-processing statements outlined in $PDS_{i,j}$, into the newly created enclave. These code components are received from a trusted code provider ($CP$) (Stage 1.1).
The primary data user, $DU_{1,1}$, completes its setup process (Stage 1 for $DU_{i=1,j=1}$) and aggregates the $PDS$ structures of the entire data usage tree into $PDS_{1,1}$. Following this, the data owner can review $PDS_{1,1}$ from the privacy policy document of the primary data user. The data owner may then choose to opt out of specific proposed data processing statements while accepting a subset, $PDS_{DO}$. Subsequently, the data owner sends $PDS_{DO}$, along with the relevant personal data items ($DS_{DO}$), securely to the primary data user’s enclave (Stage 2 in Figure 1).
It is to be noted that, although $PDS_{1,1}$ contains data processing statements for the entire tree, only a subset of it, $LPDS_{1,1}$, is processed by $DU_{1,1}$; the child nodes process the rest. After receiving the personal data and processing consent (in Stage 2), $DU_{1,1}$ may, when necessary, request its own enclave to compute results according to a specific processing statement, $S \in (PDS_{DO} \cap LPDS_{1,1})$ (Stage 3 for $DU_{i=1,j=1}$).
Whenever necessary, $DU_{1,1}$ requests its enclave to forward the required subset, $PDS_{fwd} \subseteq (PDS_{DO} \cap PDS_{2,c})$, to $DU_{2,c} \in \mathrm{children}(DU_{1,1})$, along with the corresponding subset of $DS_{DO}$ (Stage 4 for $DU_{i=1,j=1}$). After receiving these, $DU_{2,c}$ may subsequently process or forward them to its own children. In this manner, the $DO$’s personal data may propagate to any $DU_{i,j}$.
BPPM verifies the authenticity of the data owner’s consent by checking the signature on $PDS_{DO}$. However, a standard signature scheme falls short in this scenario because only part of the original consent is transmitted to a child node during the forwarding process, making it impossible for the child node to validate the signature.
While a redactable-signature scheme (RSS) [11] might seem like a viable option, it incurs substantial computational costs. Instead, we harness the capabilities of the TEE within BPPM to achieve results akin to a redactable signature scheme while sidestepping these associated expenses.

2.2. Privacy and Security Goals

BPPM works on a zero-trust assumption. Data users as well as data owners are permitted to act with complete malice. Both can observe, record, and replay messages, as well as deviate from established protocols. Although data users must possess a TEE, they can still control (modify or view) everything outside the TEE, including the operating system, communication channels, RAM, and more. Data users may also send malicious inputs when invoking the enclave’s entry points; they can even disable their TEE platform to gain an advantage.
Consequently, our protocol must be equipped to handle all these scenarios and ensure the following security and privacy goals are met:
1. $DU_{i,j}$ is required to provide a Guarantee of Privacy Preservation (see Section 3.1) to $\mathrm{parent}(DU_{i,j})$. $\mathrm{parent}(DU_{i,j})$ must obtain this guarantee on behalf of the entire $\mathrm{tree}(DU_{i,j})$, without learning any of the data users in $\mathrm{children}(DU_{i,j})$. The sub-tree of the entire data usage tree, rooted at $DU_{i,j}$, is denoted by $\mathrm{tree}(DU_{i,j})$, while $\mathrm{parent}(DU_{i,j})$ refers to the parent node of $DU_{i,j}$. Recursively, $DU_{1,1}$ must provide a guarantee of privacy preservation to the $DO$ on behalf of the entire data usage tree, $\mathrm{tree}(DU_{1,1})$, without revealing any other members in $\mathrm{tree}(DU_{1,1})$.
2. $DU_{i,j}$ must only be aware of $\mathrm{parent}(DU_{i,j})$ as the source of the received data and $\mathrm{children}(DU_{i,j})$ as the next tier of data users. Moreover, $DU_{i,j}$ must remain unaware of the identity/existence of its siblings. Consequently, only $DU_{1,1}$ should be aware of the true end service user (or the $DO$).
3. A malicious $DO$ must not be able to feed fake data (or replayed data) to gain access to unauthorized services.
4. The $DO$ must be able to approve a single consolidated list, $PDS_{DO}$, without learning which processing is carried out locally and which is outsourced. Hence, the distribution of the data processing statements between $DU_{1,1}$ (i.e., $LPDS_{1,1}$) and $\mathrm{children}(DU_{1,1})$ (i.e., $PDS_{1,1} \setminus LPDS_{1,1}$) must remain unknown to the $DO$.
5. No one other than $DU_{i,j}$ and the $CP$ must be aware of $LPDS_{i,j}$. The $CP$ is considered trusted and does not share the details of $LPDS_{i,j}$, which could reveal $DU_{i,j}$’s business strategy.
6. While forwarding data to $DU_{i,j}$, $\mathrm{parent}(DU_{i,j})$ must reduce the received $PDS$ to $PDS_{fwd}$ and $DS$ to $DS_{fwd}$. $PDS_{fwd}$ contains only the $DO$-agreed subset of personal data-processing statements required by $\mathrm{tree}(DU_{i,j})$, and $DS_{fwd}$ is the set of associated personal data. This reduction ensures that an honest-but-curious $DU_{i,j}$ cannot gain access to any additional information or insights regarding the processing activities conducted by its parent or siblings.
7. Although $PDS_{fwd}$ and $DS_{fwd}$ are reduced versions of the originals sent by the $DO$, $DU_{i,j}$ needs assurance regarding their authenticity. In other words, BPPM should guarantee to $DU_{i,j}$ that $\mathrm{parent}(DU_{i,j})$ cannot add or modify any content in $PDS_{fwd}$ or $DS_{fwd}$, although content can be removed/redacted.
8. Source code that respects privacy and complies with the stated data processing statement needs to be audited, but the process can be expensive and time-consuming. Thus, reusing pre-audited code in a controlled way is required, while protecting such privacy-preserving source code against piracy.

3. Protocol Design Details

We now describe the detailed design of BPPM. First, we define what we mean by a guarantee of privacy preservation. Next, we show how a piracy-free code-sharing ecosystem can be constructed. The required data structures used in BPPM are then described, along with the details of our protocol.

3.1. Guarantee of Privacy Preservation

This guarantee encompasses three key aspects. First, personal data is never disclosed in plaintext; the untrusted data user can only access the results of computations performed on that data. This is accomplished by processing personal data within an enclave, which accepts encrypted data as input and produces plaintext output. Second, no other computations on personal data are allowed beyond those explicitly authorized by the $DO$. Our protocol ensures that the enclave verifies the $DO$’s explicit permission for any requested processing. Lastly, the output of these computations does not compromise privacy, as only audited code is allowed to execute on the $DO$’s personal data.

3.2. Piracy-Free Source-Code Sharing

The privacy of the $DO$ may be compromised if the code operating within the enclave inadvertently or deliberately leaks confidential information (e.g., malicious code may reveal the decryption of the received inputs). Furthermore, the code executed within the enclave must align with the specified data processing statement. For instance, if the data processing statement indicates that the date of birth is utilized solely to verify age, the code should not use this information for any alternative purpose, such as sending promotional offers. These two aspects must be assessed during the auditing process (e.g., SOC audits) and validated by a trusted auditor. However, this process can be both time-consuming and costly [2]. Additionally, in most cases, only standard processing is required for personal data (e.g., using date of birth to verify age). Therefore, whenever possible, it may be advantageous to reuse pre-audited code.
Accordingly, we assume the presence of a code provider, who develops source code that facilitates various standard operations on diverse personal data items and has it audited by a trusted auditor. The $CP$ maintains a database of this pre-audited code, which can be purchased by various data users ($DU_{i,j}$) as needed. However, there is a risk that a malicious user may access the code in plaintext, allowing them to steal intellectual property or secretly distribute the code to others. Thus, piracy protection is required.
BPPM leverages the presence of the TEE on each data user’s location to protect against piracy as well. Instead of sending the plaintext code, the code is encrypted using the public key specific to the recipient’s enclave. Thus, only the destined and trusted enclave, with access to the corresponding secret key, can recover and execute the code.
However, pre-existing code might not always be available for every required data usage statement ($S \in LPDS_{i,j}$). In such cases, $DU_{i,j}$ would indeed need to develop the corresponding code and subject it to auditing before use; here, $DU_{i,j}$ acts as its own $CP$. While this approach may be more expensive, $DU_{i,j}$ can announce the availability of the newly developed code through a public bulletin board and become a code provider. Other data users seeking similar functionality can then purchase this newly developed and audited code, allowing $DU_{i,j}$ to recover some of the incurred costs (i.e., to offset expenses related to development and auditing).
Indeed, different parties (e.g., other data users or even dedicated software developers) could play the role of a $CP$. The entire code-reuse scenario is depicted in Figure 2. It is to be noted that playing the role of a $CP$ may provide some financial benefit for $DU_{i,j}$, though it is not a mandatory requirement of BPPM. Indeed, a code developer might not want others to use their code, either inside or outside of a black-box environment (e.g., if the developed code gives the developer a strategic or commercial advantage).

3.3. Data Structures

Iyilade et al. designed “Purpose-to-Use” (P2U) [12], a privacy policy specification language for secondary data-sharing. Inspired by this work, we designed two simple XML-like data structures (Figure 3) that can transfer data and the accompanying data processing consent in BPPM. Unlike P2U, our data structures do not require the identity of any data user to be revealed while still providing privacy guarantees to the data owners as well as to the data users. Our structures also ensure the authenticity of the data and consent.

3.3.1. Personal Data Structure (D)

The $DO$ provides their personal data through the personal Data structure, $D$, in encrypted format; only the enclave can decrypt and access its elements. The array $D.DS[\,]$ contains the actual Data Set, comprising a collection of data items (e.g., $D.DS[\,] = \{$passport number, country of origin, date of birth$\}$). To manage the processing consent, the $DO$ generates a new key pair ($pk_{con}$, $sk_{con}$). The secret key $sk_{con}$ is utilized to sign the agreed-upon list of data processing statements, and this Consent Signing secret Key is stored in $D.CSK$. During the forwarding phase, a trusted enclave can use this secret key to create a signature on the reduced version of the original consent.
This ephemeral key pair alone does not guarantee authenticity. Therefore, the $DO$ also possesses a long-term key pair ($pk_{DO}$, $sk_{DO}$) and signs the ephemeral key with $sk_{DO}$. A PKI-verifiable public key certificate, $Cert_{DO}$ (stored in $D.DOC$), certifies $pk_{DO}$.
Since $Cert_{DO}$ is not disclosed outside the enclave, the identity of the $DO$, which is also included within $Cert_{DO}$, remains protected. Finally, $D.DOS$ holds the Data Owner’s Signature (generated with $sk_{DO}$) on the concatenation of $D.DS[\,]$ and $pk_{con}$. The enclave, having access to $D.DOC$, is able to verify the authenticity of $D.DOS$.

3.3.2. Personal Data Processing Consent (PC)

The entire $PC$ structure acts as the details of the $DO$’s personal data Processing Consent. It contains the details of the $DO$’s agreed-upon list of processing statements (referred to as $PDS[\,]$ and described in greater detail in Section 3.3.3), along with additional elements. Data users may access this structure in plaintext format, outside of their enclave environment. $PC.DH$ contains the Hash of the plaintext $D$. The Signature on the Consent, $PC.SC$, stores the signature on the concatenation of $PDS[\,]$ and $DH$. This signature is generated using $D.CSK$ and can be verified with the consent Verification Key ($pk_{con}$), which is stored in $PC.VK$.

3.3.3. Personal Data Processing Details Set (PDS)

The personal data Processing Details Set is a sub-structure of $PC$. It includes a list of Processing Details ($PD$), each of which comprises three sub-elements: $PD.S$, $PD.CH$, and $PD.AS$. $PD.S$ contains the data processing Statement in a human-readable format. It details what personal data item is required, why it is required, how it will be processed, and what will happen if the service user does not consent (e.g., if the service user opts out of providing their date of birth, then the senior citizens’ discount cannot be claimed). $PD.CH$ holds the hash value of the corresponding audited source code, and the Auditor’s Signature is stored in $PD.AS$. During signing, the auditor combines $PD.CH$ and $PD.S$ to ensure that processing can be performed solely for the stated purpose.
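To make the layout concrete, the following C sketch mirrors the three structures described above. Only the field names follow the paper; all widths, bounds, and the MAX_ITEMS constant are illustrative assumptions, not the exact layout used in our implementation.

```c
/* Illustrative C rendering of the structures from Section 3.3.
 * Field widths and array bounds are assumptions for this sketch. */
#include <stddef.h>

#define MAX_ITEMS 100            /* up to 100 data items (cf. Section 5.2) */

typedef struct {
    char          S[1024];       /* human-readable processing statement   */
    unsigned char CH[32];        /* hash of the audited source code       */
    unsigned char AS[256];       /* auditor's signature over S and CH     */
} PD;                            /* one Processing Detail (Section 3.3.3) */

typedef struct {
    PD            PDS[MAX_ITEMS];/* DO-approved processing statements     */
    size_t        n_pds;
    unsigned char DH[32];        /* hash of the D structure               */
    unsigned char VK[256];       /* consent verification key, pk_con      */
    unsigned char SC[256];       /* signature on PDS[] || DH under sk_con */
} PC;                            /* plaintext consent (Section 3.3.2)     */

typedef struct {
    unsigned long long DS[MAX_ITEMS]; /* 64-bit data items (Section 5.2)  */
    size_t        n_ds;
    unsigned char CSK[256];      /* consent-signing secret key, sk_con    */
    unsigned char DOC[2048];     /* Cert_DO certifying pk_DO              */
    unsigned char DOS[256];      /* DO's signature on DS[] || pk_con      */
} D;                             /* sent encrypted; opened only in enclave */
```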

3.4. Detailed Protocol

A portion of the BPPM protocol is executed through a secure enclave program, while the participating parties handle the remaining aspects. Figure 4 presents a formal description of the protocol executed by the participating parties ($Prot_{BPPM}$), while Figure 5 illustrates the enclave program ($Prog_{Eclv}$). These representations are grounded in the established ideal functionality of attested execution, denoted as $\mathcal{G}_{att}$ [13].
The protocol ($Prot_{BPPM}$) interacts with the enclave program ($Prog_{Eclv}$) at specific entry points by issuing a “resume” command to $\mathcal{G}_{att}$ (i.e., lines 5, 10, 13, 16, and 19 in Figure 4). In $Prot_{BPPM}$, the $DO$, all data users ($DU_{i,j}$, $\forall i,j$), and the $CP$ each have distinct roles. The entry points are color-coded: green entry points may only be invoked once, with any subsequent attempts resulting in a failure (⊥), while cyan entry points can be invoked multiple times. The shaded boxes represent the internal machines in $Prot_{BPPM}$ and are not exposed to the external environment, $\mathcal{Z}$. As a result, these entry points can only be controlled indirectly. In contrast, all other entry points of $Prot_{BPPM}$ are directly accessible by $\mathcal{Z}$.
BPPM utilizes two arrays, $C[\,]$ and $EX[\,]$, both indexed by the personal data processing statements, $S$. The entry $C[S]$ contains the plaintext source code corresponding to data processing statement $S$ (represented as a string variable). There is a cost involved in producing $C[S]$ and auditing it, which may necessitate that data users compensate the $CP$ for using $C[S]$. The $EX[\,]$ array maintains a list of data users who have been granted permission to execute the code $C[S]$.
Two different EUF-CMA-secure digital signature schemes are used in BPPM. The first, $\Sigma(\mathrm{KGen}, \mathrm{Sig}, \mathrm{Vf})$, is used for verifying the authenticity of $D.DOS$, $PD.AS$, and $PC.SC$. The second, $\Sigma_{TEE}(\mathrm{KGen}, \mathrm{Sig}, \mathrm{Vf})$, is used by TEEs to attest to the legitimacy of their outputs. An IND-CPA-secure asymmetric encryption scheme, $AE(\mathrm{KGen}, \mathrm{Enc}, \mathrm{Dec})$, is used to communicate with the enclave securely. A second-preimage-resistant hash function ($H$) is used to produce the digests of $C[S]$ and $D$.
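Seen as an interface, these primitives amount to roughly the following header sketch. The C prototypes are hypothetical and simply mirror the notation above; our prototype binds them to MBed-TLS, as described in Section 5.1.

```c
/* Hypothetical prototypes mirroring the primitives named above. */
#include <stddef.h>

/* Sigma / Sigma_TEE: EUF-CMA-secure signature schemes (KGen, Sig, Vf) */
int sig_keygen(unsigned char *pk, unsigned char *sk);
int sig_sign(const unsigned char *sk, const unsigned char *m, size_t mlen,
             unsigned char *sig, size_t *siglen);
int sig_verify(const unsigned char *pk, const unsigned char *m, size_t mlen,
               const unsigned char *sig, size_t siglen);

/* AE: IND-CPA-secure asymmetric encryption scheme (KGen, Enc, Dec) */
int ae_keygen(unsigned char *pk, unsigned char *sk);
int ae_encrypt(const unsigned char *pk, const unsigned char *pt, size_t ptlen,
               unsigned char *ct, size_t *ctlen);
int ae_decrypt(const unsigned char *sk, const unsigned char *ct, size_t ctlen,
               unsigned char *pt, size_t *ptlen);

/* H: second-preimage-resistant hash over C[S] and D */
int hash_digest(const unsigned char *m, size_t mlen, unsigned char out[32]);
```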

3.4.1. Setup Phase

Initially, each data user enters the setup phase. During this phase, the data user initializes a new enclave and launches it to execute the trusted base code, $Prog_{Eclv}$. This base code securely generates an asymmetric key pair and discloses the public portion of the key. Following this, the code provider sends the pre-audited code components directly to the enclave, enhancing its processing capabilities. A sketch of this flow appears below.
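In code, the enclave side of setup reduces to roughly the following. This is a hedged sketch: ae_keygen() and tee_attest() are hypothetical placeholders for AE.KGen and the platform’s attestation primitive, and the buffer sizes are assumptions.

```c
/* Hedged sketch of Prog_Eclv's setup duty: generate an asymmetric key
 * pair inside the enclave and expose only the attested public half. */
#include <stddef.h>
#include <string.h>

int ae_keygen(unsigned char *epk, unsigned char *esk);
int tee_attest(const unsigned char *msg, size_t len,
               unsigned char *att, size_t *att_len);

static unsigned char esk[2048];     /* secret key never leaves the enclave */

int enclave_setup(unsigned char epk_out[512],
                  unsigned char *att_out, size_t *att_len)
{
    unsigned char epk[512];

    if (ae_keygen(epk, esk) != 0)
        return -1;
    /* The attestation binds epk to this enclave's measurement, so the
     * CP can safely encrypt pre-audited code to epk (Stage 1.1). */
    if (tee_attest(epk, sizeof epk, att_out, att_len) != 0)
        return -1;
    memcpy(epk_out, epk, sizeof epk);
    return 0;
}
```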

3.4.2. SendOrgData Phase

During the SendOrgData phase, the data owner securely transmits their personal data along with the approved processing consent to the primary data user’s enclave. Upon receipt, the enclave verifies the authenticity of this information and stores the personal data in an encrypted and sealed format [14] for future use. However, the enclave does disclose the specifics of the approved processing consent, allowing the data user to clearly understand the services that can be offered based on those consents and the business logic.

3.4.3. Process Phase

When the data user intends to process the received personal data, they request specific processing from their enclave. Upon receiving the request, the enclave first verifies whether the data owner granted permission for that processing. If the permission is confirmed, the enclave executes the processing code on the private data within a secure sandbox environment and subsequently reveals the computation result to the data user.
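The sketch below illustrates this gatekeeping step inside the enclave. The PD/PC types follow the illustrative structures from Section 3.3, and verify_consent_sig() and hash_code() are hypothetical placeholders rather than our actual enclave API.

```c
/* Hedged sketch of the enclave's permission check before processing. */
#include <stddef.h>
#include <string.h>

typedef struct { char S[1024]; unsigned char CH[32]; } PD;
typedef struct { PD PDS[100]; size_t n_pds; } PC;

int verify_consent_sig(const PC *pc);  /* checks PC.SC against PC.VK */
int hash_code(const unsigned char *code, size_t len, unsigned char out[32]);

/* Returns 0 iff statement S was consented to by the DO AND the loaded
 * code hashes to the audited value bound to that statement. */
int may_process(const PC *pc, const char *S,
                const unsigned char *code, size_t code_len)
{
    unsigned char h[32];
    if (verify_consent_sig(pc) != 0)        /* is the consent intact? */
        return -1;
    if (hash_code(code, code_len, h) != 0)
        return -1;
    for (size_t i = 0; i < pc->n_pds; i++)
        if (strcmp(pc->PDS[i].S, S) == 0 && memcmp(pc->PDS[i].CH, h, 32) == 0)
            return 0;                       /* approved and audited   */
    return -1;                              /* otherwise refuse       */
}
```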
Given that TEEs can offer stateful obfuscation [13], our solution is also capable of delivering differential privacy [15]. To achieve this, the enclave may optionally introduce some noise before outputting the final result. Additionally, the enclave may track the privacy budget, and once this budget is depleted, it can refrain from disclosing the result.
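As a rough illustration, the following sketch adds Laplace noise and tracks a privacy budget before a result leaves the enclave. The budget constant and parameters are assumptions for this sketch, and a production enclave would replace rand() with a cryptographically secure RNG; BPPM treats this mechanism as an optional add-on.

```c
/* Hedged sketch: optional Laplace noise plus privacy-budget tracking. */
#include <math.h>
#include <stdlib.h>

#define EPS_TOTAL 1.0               /* assumed total privacy budget */
static double eps_spent = 0.0;

/* Sample Laplace(0, b) via inverse CDF on u in (-0.5, 0.5). */
static double laplace(double b)
{
    double u = ((double)rand() + 1.0) / ((double)RAND_MAX + 2.0) - 0.5;
    return -b * (u > 0 ? 1.0 : -1.0) * log(1.0 - 2.0 * fabs(u));
}

/* Release an epsilon-DP noisy result, or refuse once the budget is spent. */
int release(double result, double sensitivity, double eps, double *out)
{
    if (eps_spent + eps > EPS_TOTAL)
        return -1;                  /* budget depleted: disclose nothing */
    eps_spent += eps;
    *out = result + laplace(sensitivity / eps);
    return 0;
}
```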

3.4.4. Forward Phase

When a data user intends to share a portion of the personal data they have received, they submit a request to their enclave. In this request, the data user specifies which processing capabilities should be removed during the forwarding process to prevent the child node from gaining any undue processing advantages. In response, the enclave prepares the personal data with the reduced processing consent. Subsequently, on behalf of the data owner, the enclave securely generates a signature for this new consent, enabling the receiving enclave to verify the trustworthiness of any forwarded information, as sketched below.
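This redact-then-re-sign step, which gives the effect of a redactable signature without its cost (cf. Section 2.1), can be sketched as follows. The types follow the Section 3.3 sketch, and keep() and sign_with_csk() are hypothetical placeholders for the statement filter and for signing with the consent key D.CSK held inside the enclave.

```c
/* Hedged sketch of PrepFwd inside the enclave: copy only the forwarded
 * subset of statements into FPC, then re-sign with the DO's consent key. */
#include <stddef.h>
#include <string.h>

typedef struct { char S[1024]; unsigned char CH[32]; unsigned char AS[256]; } PD;
typedef struct {
    PD PDS[100]; size_t n_pds;
    unsigned char DH[32]; unsigned char VK[256]; unsigned char SC[256];
} PC;

int keep(const PD *pd, const char **fwd_stmts, size_t n_fwd);
int sign_with_csk(const unsigned char *csk, const PC *fpc,
                  unsigned char sc_out[256]);

int prep_fwd(const PC *pc, const unsigned char *csk,
             const char **fwd_stmts, size_t n_fwd, PC *fpc)
{
    memset(fpc, 0, sizeof *fpc);
    for (size_t i = 0; i < pc->n_pds; i++)       /* redact: keep only the */
        if (keep(&pc->PDS[i], fwd_stmts, n_fwd)) /* forwarded subset      */
            fpc->PDS[fpc->n_pds++] = pc->PDS[i];
    memcpy(fpc->DH, pc->DH, sizeof fpc->DH);
    memcpy(fpc->VK, pc->VK, sizeof fpc->VK);
    return sign_with_csk(csk, fpc, fpc->SC);     /* fresh signature on FPC */
}
```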
For further step-by-step details of our protocol, readers may refer to Appendix C.

4. Privacy and Security Proof

The proof of the privacy and security properties of BPPM unfolds in multiple stages. Initially, the threat model and necessary assumptions are addressed. Subsequently, the ideal functionality for BPPM, $\mathcal{F}_{BPPM}$, is defined. Finally, $\mathcal{F}_{BPPM}$ is utilized to demonstrate BPPM’s privacy and security within a UC framework.

4.1. Assumptions and Threat Model

Our threat model assumes the presence of a Byzantine adversary. In the context of BPPM, this implies that the data owner and data users, at any layer, may engage in malicious behavior. The sole requirement for data users in BPPM is that they must possess a TEE-enabled platform, whether locally or in the cloud. BPPM can detect whether a data user is using a TEE and, if not, excludes that data user from the protocol. We assume the $CP$ is honest and keeps the details of its contract ($LPDS_{i,j}$) with $DU_{i,j}$ confidential.
BPPM utilizes typical (and insecure) public communication channels (e.g., a conventional TCP channel over the internet). Consequently, BPPM addresses the resulting concerns by employing the required strategies (e.g., an IND-CPA-secure encryption scheme) within the protocol execution. However, if enhanced security/privacy is needed (e.g., the $DO$’s identity needs to be hidden from $DU_{1,1}$), BPPM supports alternative anonymous communication channels (e.g., Tor and Mixnet) as add-ons, without requiring any modifications to our protocol. In this paper, we do not consider denial-of-service (DoS) attacks as part of the adversary’s capabilities.
BPPM requires the $DO$ to possess a long-term, PKI-verifiable public-key certificate, $Cert_{DO}$. The $DO$ is assumed not to share the corresponding secret key, $sk_{DO}$, with anyone. Auditing source code for security and privacy vulnerabilities is not a part of BPPM. However, BPPM verifies the auditor’s signature before executing any code. It is assumed that the $CP$ will possess all the audited code ($C[\,]$) before taking part in BPPM.
In designing BPPM, we took into account the fact that support for TEEs on client-side platforms, such as desktop processors, has either slowed down or come to a standstill [16]. Our solution only necessitates TEE functionality on the server side. We believe this requirement is quite reasonable, considering the industry is poised to invest heavily in this technology [17]. This trend is reflected in the ongoing efforts to standardize TEE-based infrastructures and protocols [18].
We acknowledge that, in some situations, side-channel attacks have previously been identified as vulnerabilities for TEEs. Nevertheless, this technology is continuously evolving, and substantial advancements have been made to enhance the security of TEEs [19]. In this work, we assume that TEE hardware securely upholds its security properties.
Specifically, once execution starts, the contents of the enclave become inaccessible, and the internal code remains unchanged. Beyond this, we do not make any assumptions about security. For instance, the adversary still has the ability to control or alter everything outside the enclave, such as the operating system, communication channels, and RAM content. They can also send in malicious inputs when calling the enclave’s entry points. In designing the BPPM protocol, we considered the possibility that an adversary might try to disable or bypass the TEE, and we designed our protocol to effectively tackle these situations.

4.2. Ideal Functionality

Based on the desired privacy and security goals (Section 2.2) and the specified threat model (Section 4.1), we define an ideal functionality, $\mathcal{F}_{BPPM}$ (Figure 6). As mentioned earlier, it has access to $C[\,]$ and $EX[\,]$. As a fundamental construct of a UC framework [20], the caller of $\mathcal{F}_{BPPM}$ is either an honest party or the simulator, $\mathsf{Sim}$. If $\mathcal{F}_{BPPM}$ sends a message, it first goes to the adversary, $\mathcal{A}$. If $\mathcal{A}$ allows, the message is forwarded on to the proper recipient. Since BPPM uses typical insecure communication channels, the identity of the sender/receiver and the message remain visible to $\mathcal{A}$. The following sections describe the design of each entry point of $\mathcal{F}_{BPPM}$.

4.2.1. Setup Entry Point of $\mathcal{F}_{BPPM}$

In the real world, during the setup call, the fact that $DU_{i,j}$ is currently executing setup is revealed to $\mathcal{A}$ due to the observability of the multiple SendCode calls directed to the $CP$. To capture this, $\mathcal{F}_{BPPM}$ notifies $\mathcal{A}$ with the 3-tuple (setup, $DU_{i,j}$, $PDS_{i,j}[\,]$). $\mathcal{F}_{BPPM}$ then initializes the required storage area specific to the invoking data user and stores the input $PDS_{i,j}[\,]$. This will be used later during the data processing and forwarding phases. Since, in the real world, $DU_{i,j}$ publishes $PDS_{i,j}[\,]$ at the end of the setup call, $\mathcal{F}_{BPPM}$ reveals $PDS_{i,j}[\,]$ as a public output. Notably, the $CP$ does not take part in $\mathcal{F}_{BPPM}$, as it serves solely as an internal machine within $Prot_{BPPM}$.

4.2.2. SendOrigData Entry Point of $\mathcal{F}_{BPPM}$

After invocation of the SendOrigData entry point, $\mathcal{F}_{BPPM}$ prepares $D$ and $PC$ in a manner that mirrors the operations of the $DO$ in the real world. In the real world, the $DO$ executes the remote call (SaveData…), which allows $\mathcal{A}$ to infer that the $DO$ is now engaged in the SendOrigData phase.
Hence, $\mathcal{F}_{BPPM}$ notifies $\mathcal{A}$ with the 5-tuple (SendOrigData, $D_{ct}$, $PC_{ct}$, $DO$, $DU_{1,1}$). $\mathcal{F}_{BPPM}$ uses the input $epk$ while creating the ciphertexts $D_{ct}$ and $PC_{ct}$. If $\mathcal{A}$ permits, $\mathcal{F}_{BPPM}$ then carries out the subsequent operations and stores $D$ and $PC$ for future use. Finally, $\mathcal{F}_{BPPM}$ discloses the plaintext version of $PC$ and the newly generated $D_{id}$ to $\mathcal{A}$.

4.2.3. ProcessData Entry Point of $\mathcal{F}_{BPPM}$

During data processing, $\mathcal{F}_{BPPM}$ first ensures that $DU_{i,j}$ has the required code-execution permission. Furthermore, it ensures that the consent structure corresponding to $D_{id}$ allows the processing of $S$. If both conditions are met, the corresponding processing is performed on the personal data, and the computation result is returned.

4.2.4. ForwardData Entry Point of $\mathcal{F}_{BPPM}$

While forwarding, $\mathcal{F}_{BPPM}$ first retrieves the $D$ and $PC$ corresponding to the requested $D_{id}$ and, by following the steps of $Prot_{BPPM}$, prepares the forwarded version of the consent, $FPC$. After that, to correspond with the real world, $\mathcal{F}_{BPPM}$ sends the 5-tuple (ForwardData, $D_{ct}$, $FPC_{ct}$, $DU_{i,j}$, $child$) to $\mathcal{A}$. If $\mathcal{A}$ allows the communication, $\mathcal{F}_{BPPM}$ performs its normal operation, creates a new $D_{id}$ for the $child$, and stores $D$ and $FPC$ within the storage specific to the child. Finally, $\mathcal{F}_{BPPM}$ outputs the plaintext version of $FPC$ and the newly generated $D_{id}$ to the $child$.

4.3. UC-Proof

Theorem 1.
If $\Sigma_{TEE}$ and $\Sigma$ are EUF-CMA-secure (randomized) signature schemes, $H$ is second-preimage-resistant, and $AE$ is IND-CPA-secure, then $Prot_{BPPM}$ “UC-realizes” $\mathcal{F}_{BPPM}$ in the $\mathcal{G}_{att}$-hybrid model.
Proof. 
Let $\mathcal{Z}$ be an environment and $\mathcal{A}$ be a dummy adversary [20] who simply relays messages between $\mathcal{Z}$ and other parties. To show that $Prot_{BPPM}$ “UC-realizes” $\mathcal{F}_{BPPM}$, we must show that there exists a simulator, $\mathsf{Sim}$, such that for any environment, all interactions between $Prot_{BPPM}$ and $\mathcal{A}$ are indistinguishable from all interactions between $\mathcal{F}_{BPPM}$ and $\mathsf{Sim}$. In other words, the following must be satisfied:
$$\forall \mathcal{Z}: \quad \mathsf{EXEC}_{Prot_{BPPM},\,\mathcal{A},\,\mathcal{Z}} \approx \mathsf{EXEC}_{\mathcal{F}_{BPPM},\,\mathsf{Sim},\,\mathcal{Z}}$$
$\mathcal{F}_{BPPM}$ takes part solely in the ideal world. In the ideal world, an honest party does not engage in any actual work. Instead, after receiving inputs from $\mathcal{Z}$, it communicates with $\mathcal{F}_{BPPM}$. $\mathsf{Sim}$ can also communicate with $\mathcal{F}_{BPPM}$. To achieve indistinguishability between the ideal and real world, $\mathsf{Sim}$ emulates the transcripts of the real world using information obtained from $\mathcal{F}_{BPPM}$.
In the case of corrupted parties, $\mathsf{Sim}$ extracts their inputs and interacts with $\mathcal{F}_{BPPM}$. Once $\mathcal{F}_{BPPM}$ provides a response, $\mathsf{Sim}$ emulates the transcripts for the corrupted party, which remain indistinguishable from the real world. Here, the term corrupted party refers to the party under the adversary’s control. Unless otherwise specified, any communication between $\mathcal{Z}$ and $\mathcal{A}$ or between $\mathcal{A}$ and $\mathcal{G}_{att}$ is simply forwarded by $\mathsf{Sim}$.
In this proof, we demonstrate the existence and design of $\mathsf{Sim}$ and establish that its interactions are indistinguishable from the real world across the four fundamental design stages previously listed: setup, SendOrigData, ProcessData, and ForwardData. We adhere to the general strategies of UC security proofs found in existing TEE-based solutions [13,21,22] while designing the simulator for BPPM. The following sections detail the design of the simulator and present the argument for indistinguishability, covering all possible combinations of corrupt and honest parties. □

4.3.1. Sim Design—Setup

Our threat model assumes that the $CP$ is honest, but $DU_{i,j}$ can be either honest or malicious. Hence, there are two possible combinations.
A. When $DU_{i,j}$ is honest: $\mathsf{Sim}$ generates a key pair ($pk_{TEE}^{sim}$, $sk_{TEE}^{sim}$) for $\Sigma_{TEE}$ and publishes $pk_{TEE}^{sim}$. When $\mathsf{Sim}$ obtains a notification (setup, $DU_{i,j}$, $PDS_{i,j}[\,]$) from $\mathcal{F}_{BPPM}$, it emulates the real-world interaction between $DU_{i,j}$ and the $CP$. To do that:
1. $\mathsf{Sim}$ chooses random $eid^{sim}$, $epk^{sim}$ and generates $\sigma_{TEE}(epk^{sim})$ with $sk_{TEE}^{sim}$.
2. $\mathsf{Sim}$ parses the $PDS_{i,j}[\,]$ received from $\mathcal{F}_{BPPM}$ and generates $|PDS_{i,j}[\,]|$ network messages of the format (SendCode, $PD.S$, $eid^{sim}$, $epk^{sim}$, $\sigma_{TEE}(epk^{sim})$).
3. Only after receiving $|PDS_{i,j}[\,]|$ replies from the $CP$ does $\mathsf{Sim}$ instruct $\mathcal{F}_{BPPM}$ to continue. Note: Since $\mathsf{Sim}$ cannot prove its identity as $DU_{i,j}$, the $CP$ will reply with an encrypted dummy code in all the responses.
Hybrid 1 works similarly to the real-world scenario, except that $eid$ is replaced with a random value and $epk$ with the public part of a randomly generated key pair. Since an enclave also generates $eid$ and $epk$ in the same random fashion, Hybrid 1 is indistinguishable from the real world.
Hybrid 2 is similar to Hybrid 1, except that $\sigma_{TEE}(epk)$ is replaced with $\sigma_{TEE}^{sim}(epk^{sim})$. Since $\Sigma_{TEE}$ is EUF-CMA-secure, Hybrid 2 is indistinguishable from Hybrid 1.
Hybrid 3 is similar to Hybrid 2, except that all the replies from the $CP$ are replaced with encryptions of dummy messages of the same length. Since $AE$ is IND-CPA-secure, Hybrid 3 is indistinguishable from Hybrid 2. Notice that Hybrid 3 is exactly the same as the ideal world.
B. When $DU_{i,j}$ is corrupt: In this case, nothing will be performed by the ideal functionality. Hence, there is nothing to simulate.

4.3.2. Sim Design—SendOrigData

A. When both $DO$ and $DU_{1,1}$ are honest: As both parties are honest, $\mathcal{A}$ can only see and tamper with the network messages, which are all that $\mathsf{Sim}$ needs to emulate. To do that:
1. $\mathsf{Sim}$ starts after $\mathcal{F}_{BPPM}$ notifies with (SendOrigData, $D_{ct}$, $PC_{ct}$, $DO$, $DU_{1,1}$).
2. $\mathsf{Sim}$ forwards that to $\mathcal{A}$.
3. If $\mathcal{A}$ does not tamper with the network communication, then $\mathsf{Sim}$ instructs $\mathcal{F}_{BPPM}$ to continue normally. On the other hand, if $\mathcal{A}$ tampers with the communication, then in the real world the enclave identifies this and aborts. Hence, our designed $\mathsf{Sim}$ also returns an abort signal in that case.
In the real world, the enclave can detect any alteration of $D_{ct}$ or $PC_{ct}$ and notifies with abort. Step 3 of $\mathsf{Sim}$ ensures the same behavior in the ideal world as well. Hence, the real world and ideal world remain indistinguishable, without requiring any further hybrid arguments.
B. When $DO$ is honest but $DU_{1,1}$ is corrupted: For this case, $\mathsf{Sim}$ mainly records $\mathcal{A}$’s messages and faithfully emulates $\mathcal{G}_{att}$’s behavior. Hence:
1. $\mathsf{Sim}$ starts after receiving notification from $\mathcal{F}_{BPPM}$ and intercepts the communication between $\mathcal{A}$ and $\mathcal{G}_{att}$.
2. If $\mathcal{A}$ alters the ciphertexts ($D_{ct}$, $PC_{ct}$) while calling the SaveData entry point of $\mathcal{G}_{att}$, $\mathsf{Sim}$ returns abort.
3. Otherwise, $\mathsf{Sim}$ instructs $\mathcal{F}_{BPPM}$ to continue normally. In this case, $\mathsf{Sim}$ delivers the $\mathcal{F}_{BPPM}$-returned $PC$ and $D_{id}$ to $\mathcal{A}$ (instead of the $\mathcal{G}_{att}$-returned replies).
4. Moreover, by utilizing the equivocation method [13], $\mathsf{Sim}$ produces a valid attestation for $PC$ and $D_{id}$ and delivers them to $\mathcal{A}$.
Hybrid 1 works similarly to the real-world scenario, except that the real attestation is replaced with the attestation obtained using the equivocation method. Since $\Sigma_{TEE}$ is EUF-CMA-secure, Hybrid 1 remains indistinguishable from the real world. This Hybrid 1 is the same as the ideal world.
C. When $DO$ is corrupted but $DU_{1,1}$ is honest: Since, by the end of this phase, the $DO$ learns nothing, there is nothing to simulate for $\mathcal{A}$. $\mathsf{Sim}$ only ensures that the effect of $\mathcal{A}$’s possible malicious actions remains indistinguishable between the ideal and real world. Hence:
1. $\mathsf{Sim}$ observes what the $DO$ performs before executing the remote $SaveData()$ call.
2. If the $DO$ faithfully follows $Prot_{BPPM}$, $\mathsf{Sim}$ calls $\mathcal{F}_{BPPM}$ with the actual inputs of the $DO$.
3. When $\mathcal{F}_{BPPM}$ notifies back, $\mathsf{Sim}$ instructs $\mathcal{F}_{BPPM}$ to continue normally.
4. On the other hand, if the $DO$ deviates from $Prot_{BPPM}$, $\mathsf{Sim}$ aborts without calling $\mathcal{F}_{BPPM}$.
The designed $\mathsf{Sim}$ ensures that the effect of $\mathcal{A}$’s actions remains the same for the honest party as in the real world.
D. When both $DO$ and $DU_{1,1}$ are corrupt: As both parties are corrupt, the ideal functionality will not be used by either of them, requiring no simulation.

4.3.3. Sim Design—Process

Since only $DU_{i,j}$ is involved in this phase, there are only two possible combinations:
A. When $DU_{i,j}$ is honest: In this case, $DU_{i,j}$ obtains the proper output by consulting $\mathcal{F}_{BPPM}$. Since all the steps of this phase are executed locally, the external $\mathcal{A}$ cannot know that $DU_{i,j}$ has contacted $\mathcal{F}_{BPPM}$ instead of executing the actual protocol, $Prot_{BPPM}$. Therefore, $\mathcal{A}$ will not notice any difference, and the situation is already indistinguishable.
B. When $DU_{i,j}$ is corrupt: Since $DU_{i,j}$ is corrupt, $\mathcal{F}_{BPPM}$ will not be used. Thus, there is nothing to simulate in this case.

4.3.4. Sim Design—ForwardData

Four combinations are possible due to the involvement of two different parties.
A. When both $DU_{i,j}$ and the child are honest: Since both parties are honest, $\mathsf{Sim}$ only emulates the network transcripts. The simulation steps remain similar to those mentioned in Case A of Section 4.3.2. Specifically:
1. $\mathsf{Sim}$ starts after $\mathcal{F}_{BPPM}$ notifies with (ForwardData, $D_{ct}$, $FPC_{ct}$, $DU_{i,j}$, $child$).
2. If $\mathcal{A}$ does not tamper with the network communication, then $\mathsf{Sim}$ instructs $\mathcal{F}_{BPPM}$ to continue normally. Otherwise, $\mathsf{Sim}$ aborts without invoking $\mathcal{F}_{BPPM}$.
Step 2 of $\mathsf{Sim}$ ensures the indistinguishability between the real and ideal world.
B. When $DU_{i,j}$ is honest but the child is corrupt: In this context, the $child$’s role is analogous to that of $DU_{1,1}$ in Case B of Section 4.3.2. Therefore, $\mathsf{Sim}$ acts in an identical manner, with only one distinction: $\mathsf{Sim}$ starts once it obtains the notification (ForwardData, …) from $\mathcal{F}_{BPPM}$, rather than (SendOrigData, …).
C. When $DU_{i,j}$ is corrupt but the child is honest: Here, $\mathsf{Sim}$ mainly records $\mathcal{A}$’s messages and faithfully emulates $\mathcal{G}_{att}$’s behavior. To do so:
1. $\mathsf{Sim}$ intercepts the communication between $\mathcal{A}$ and $\mathcal{G}_{att}$.
2. If $\mathcal{A}$ invokes the PrepFwd entry point of $\mathcal{G}_{att}$, $\mathsf{Sim}$ intercepts that call and invokes the ForwardData entry point of $\mathcal{F}_{BPPM}$ with those parameters.
3. After this, $\mathcal{F}_{BPPM}$ notifies back with (ForwardData, $D_{ct}$, $FPC_{ct}$, $DU_{i,j}$, $cid$).
4. Then, by utilizing equivocation, $\mathsf{Sim}$ generates the attestation on $D_{ct}$, $FPC_{ct}$.
5. $\mathsf{Sim}$ delivers the $\mathcal{F}_{BPPM}$-returned $D_{ct}$ and $FPC_{ct}$ to $\mathcal{A}$, along with the generated attestation.
6. $\mathsf{Sim}$ then observes $\mathcal{A}$’s subsequent actions.
7. If $\mathcal{A}$ issues a $SaveData$ remote call, $\mathsf{Sim}$ verifies whether the parameters passed in that remote call match the values delivered in step 5. If they match, $\mathsf{Sim}$ instructs $\mathcal{F}_{BPPM}$ to continue normally by replying “OK”.
8. On the other hand, if $\mathsf{Sim}$ observes that $\mathcal{A}$ inputs something different in the remote $SaveData$ call, then $\mathsf{Sim}$ returns “NOT-OK” to $\mathcal{F}_{BPPM}$. In that case, $\mathcal{F}_{BPPM}$ does not deliver anything to the $child$.
Hybrid 1 works similarly to the real-world scenario, except that the real attestation is replaced with the attestation obtained using the equivocation method. Since $\Sigma_{TEE}$ is EUF-CMA-secure, Hybrid 1 remains indistinguishable from the real world, and this Hybrid 1 is the same as the ideal world.
D. When both $DU_{i,j}$ and the child are corrupt: As both parties are corrupt, the ideal functionality will not be used here, requiring no simulation.

5. Implementation Details and Performance Analysis

We now describe the critical details of our BPPM implementation. Following this, we will compare the performance of BPPM with that of a non-privacy-preserving scenario, focusing on computation time and communication bandwidth.

5.1. Implementation Details

We implemented a basic proof of concept of BPPM in the C language; it is available online [23]. Intel SGX is used as the underlying TEE technology because of its wide availability, but other TEE technologies are also compatible with BPPM. Our framework is generic and does not rely on any Intel SGX-specific features.
We used the Ubuntu 20.04 Operating System for all our development and experimentation tasks. To prepare the enclave-side code ( P r o g E c l v ), we employ the Gramine shim library [24,25], which comprises approximately 2.1 KLOC. For cryptographic primitives and network operations, we rely on the MBed-TLS library [26].
To improve efficiency, instead of encrypting the entire data using the recipient enclave’s public key, we use hybrid encryption (i.e., the sender generates a fresh symmetric key to encrypt the plaintext and that symmetric key is encrypted with the recipient’s public key) to send data or code confidentially to an enclave. Specifically, we use an attested-TLS channel [27]. As with a normal TLS channel, attested-TLS provides both confidentiality and integrity of the network message. Furthermore, it guarantees that the server side of the established TLS-channel is an enclave, executing a specific source code on a genuine TEE platform.
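As a concrete illustration of the hybrid pattern, the following hedged sketch wraps a fresh AES-256-GCM session key under the enclave’s public key using MBed-TLS. It assumes an already-initialized public-key context and seeded CTR-DRBG, and it omits the attested-TLS channel that carries these bytes in our implementation.

```c
/* Hedged sketch: hybrid encryption toward a recipient enclave. */
#include <mbedtls/pk.h>
#include <mbedtls/gcm.h>
#include <mbedtls/ctr_drbg.h>

int hybrid_encrypt(mbedtls_pk_context *epk, mbedtls_ctr_drbg_context *drbg,
                   const unsigned char *pt, size_t pt_len,
                   unsigned char *wrapped_key, size_t *wk_len, size_t wk_cap,
                   unsigned char iv[12], unsigned char tag[16],
                   unsigned char *ct /* pt_len bytes */)
{
    unsigned char key[32];                      /* fresh 256-bit session key */
    int rc = mbedtls_ctr_drbg_random(drbg, key, sizeof key);
    if (rc != 0) return rc;
    rc = mbedtls_ctr_drbg_random(drbg, iv, 12); /* fresh GCM nonce */
    if (rc != 0) return rc;

    /* Wrap the session key under the recipient enclave's public key. */
    rc = mbedtls_pk_encrypt(epk, key, sizeof key, wrapped_key, wk_len,
                            wk_cap, mbedtls_ctr_drbg_random, drbg);
    if (rc != 0) return rc;

    /* Encrypt the payload (personal data or code) under the session key. */
    mbedtls_gcm_context gcm;
    mbedtls_gcm_init(&gcm);
    rc = mbedtls_gcm_setkey(&gcm, MBEDTLS_CIPHER_ID_AES, key, 256);
    if (rc == 0)
        rc = mbedtls_gcm_crypt_and_tag(&gcm, MBEDTLS_GCM_ENCRYPT, pt_len,
                                       iv, 12, NULL, 0, pt, ct, 16, tag);
    mbedtls_gcm_free(&gcm);
    return rc;
}
```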
During the setup phase, it is essential to enhance the functionality of the data user’s base enclave. Instead of the approach taken by other researchers, which involves sending a new enclave [28], we propose transferring only the necessary code components in the form of encrypted dynamic libraries. This method not only reduces storage requirements but also shortens network transfer time. Additionally, it eliminates the overhead associated with launching a new enclave. For instance, a standard 1088 KB enclave can be substituted with a binary of a dynamic library with a size of 16 KB. The loading of this dynamic library requires only ∼1.17 ms, in contrast to the ∼15.4 ms needed for enclave initialization, plus an additional ∼1.1 ms for local attestation of that new enclave.
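A hedged sketch of this loading pattern follows; decrypt_in_enclave(), the tmpfs path, and the “process” entry symbol are illustrative assumptions rather than fixed BPPM choices (Gramine makes the standard dlopen/dlsym interface available inside the enclave).

```c
/* Hedged sketch: decrypt a pre-audited component shipped as an
 * encrypted dynamic library, then load it with dlopen. */
#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

int decrypt_in_enclave(const unsigned char *ct, size_t ctlen,
                       unsigned char *pt, size_t *ptlen);

typedef long long (*proc_fn)(const unsigned long long *items, size_t n);

proc_fn load_component(const unsigned char *ct, size_t ctlen)
{
    static unsigned char buf[1 << 16];
    size_t len = sizeof buf;
    if (decrypt_in_enclave(ct, ctlen, buf, &len) != 0)
        return NULL;

    /* Materialize the plaintext only on an in-enclave tmpfs mount. */
    FILE *f = fopen("/tmp/component.so", "wb");
    if (f == NULL) return NULL;
    fwrite(buf, 1, len, f);
    fclose(f);

    void *h = dlopen("/tmp/component.so", RTLD_NOW);
    if (h == NULL) return NULL;
    return (proc_fn)dlsym(h, "process");
}
```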
Another advantage is that, to save space in the base enclave, these libraries can be unloaded and stored on permanent media when not in use. This approach helps address the enclave’s limited size. To guard against piracy, unloaded library binaries are saved in a sealed format [14], ensuring that only the same instance of the enclave can load and execute those binaries in the future. As the binaries are never stored in plaintext outside the enclave, reverse engineering becomes impossible as well.

5.2. Performance Analysis

We utilized an SGX-enabled VM instance, DC4SV3 [29], provided by Microsoft Azure as our hardware setup during performance measurement. This instance features a quad-core processor and is equipped with 32 GB of RAM. We gathered performance metrics for BPPM as well as other candidates by running all tests in this identical environment. We utilized the local loopback interface during the experiment. Thus, instead of focusing on actual network latency, we concentrated on measuring the volume of data transferred over the network.
We evaluated the performance of BPPM by conducting each experiment 100 times and calculating the average result. Currently, a typical web service collects and processes around 20 to 30 personal data items [30]. Given the anticipated increase in personal data requirements in the future, we examine how performance is affected as this number ranges from 20 to 100. In our analysis, we assume that, on average, each personal data item (e.g., age or passport number) is represented by a 64-bit unsigned integer.
To our knowledge, no existing solution encompasses all the features offered by BPPM. Nonetheless, we compare BPPM with other alternatives that offer a partial set of these features. Regarding data privacy, computation on encrypted data (COED) is a newly defined umbrella term [31]. It comprises three technologies: homomorphic encryption (HE), multi-party computation (MPC), and functional encryption (FE).
We selected FE as a comparison candidate because it could be partially applicable in web service scenarios for privacy-preserving computation and policy enforcement. Although primarily used for different contexts, we also included MPC in our comparison due to its capability for privacy-preserving calculations in shared environments. We chose to exclude HE from the comparison as it operates on ciphertexts but yields results in ciphertext form, which makes it challenging to compare directly. Therefore, we are comparing BPPM against a representative MPC implementation, ABY [32], and an FE implementation, CiFEr [33].
A CiFEr ciphertext ends up being 64× larger than the corresponding plaintext, which increases the required communication cost. In comparison, the communication cost of BPPM is 3 to 14× smaller than CiFEr’s. ABY incurs around 7500–9000× the communication cost of BPPM (mainly due to the underlying oblivious transfers). Figure 7 compares the communication costs for performing a single data-processing operation.
We also analyze the overhead BPPM imposes over a non-privacy-preserving situation. For that purpose, we define a typical secure but non-privacy-preserving web service, “non-priv”, as follows: after agreeing to the privacy policy, the service user sends their personal data to the service provider over a server-authenticated TLS channel. The service provider verifies the authenticity of the received personal data and then stores it, in encrypted format, in a database. Whenever the service provider needs to perform computations on the data, it fetches the ciphertext, decrypts it, and performs the required computations.
In BPPM, when personal data is transferred, the corresponding P C structure (occupying ∼3.5 KB in a typical scenario) is also included in the transfer. Additionally, a remote attestation process occurs with each transfer. In an SGX environment, this remote attestation usually incurs a communication cost of ∼7 KB. Consequently, compared to a non-priv transfer, BPPM involves additional communication cost of about 10.5 KB (7 KB for remote attestation plus 3.5 KB for the P C structure) when transferring personal data.
Figure 7 shows the comparison when there is just one computation on the data user’s side (i.e., $|PDS[\,]| = 1$). It is important to note that BPPM only needs one round of communication, no matter how many data-processing operations the data user performs or how many times it performs them. All processing parameters can be specified through the single $PC$ structure (Figure 3). Although its size increases slightly with the number of processing statements (requiring ∼1 KB per processing statement), only a single transfer is needed.
In contrast, both FE and MPC require the entire network transfer process to be repeated for each processing operation, even on the same data. For them, the communication costs should be multiplied by the number of data processing statements (i.e., $|PDS[\,]|$), as shown in Figure 7.
When it comes to network latency, it can vary significantly based on the current network conditions. All our experiments were conducted on a local loopback interface, so we did not include any measurements related to network latency in this analysis. Nevertheless, it is reasonable to assume that, in any specific network circumstances, the network latency for BPPM and its comparison candidates will align with the trends shown in Figure 7. Therefore, on a relative scale, BPPM will maintain the same standing as illustrated in Figure 7.
To compare data-processing time (processing latency), we used the vector inner product as a representative processing function, since this calculation is supported by all four comparison candidates. In our experiment, each vector consisted of a set of private data items, and we varied the number of data items from 20 to 100, recording the time required. BPPM is nearly twice as slow as the non-priv mode, which aligns with the performance decline expected of an enclave environment [34]. Nevertheless, BPPM is 1600–3000× faster than CiFEr and 600–750× faster than ABY (see Figure 8).
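For reference, the sketch below illustrates the benchmark workload: the inner product over n private data items, with n swept from 20 to 100 as in our experiment. The plain-Python timing loop only illustrates the methodology; it will not reproduce the absolute numbers of the enclave-based implementation.

```python
import time

def inner_product(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

for n in range(20, 101, 20):
    xs = list(range(n))            # stand-ins for private data items
    ys = list(range(n, 2 * n))
    t0 = time.perf_counter()
    result = inner_product(xs, ys)
    print(n, result, time.perf_counter() - t0)
```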
During forwarding, BPPM produces a reduced version of the PC structure while maintaining the anonymity of the DO. This could also be accomplished with a specific type of RSS known as a signer-anonymous designated-verifier redactable signature scheme (AD-RS) [35]. However, RSS schemes are often slow, relying on computation-intensive cryptographic techniques such as zero-knowledge proofs.
In BPPM, we achieve the same effect as AD-RS by utilizing the confidentiality and attested-execution properties of TEE technology. We compare the performance of BPPM with an existing RSS implementation, XMLRSS [36]. We also measure the overhead BPPM introduces relative to a normal (i.e., non-redactable) RSA-2048 signature scheme, used as a non-privacy-preserving authentication baseline. Unsurprisingly, BPPM is not as fast as RSA-2048, but it demonstrates a significant performance advantage over XMLRSS: BPPM generates initial signatures more than 15× faster, verifies signatures over 24× faster, and redacts data more than 34× faster (see Figure 9).
Therefore, like other PETs, BPPM offers privacy preservation at the cost of computation and communication overhead. In summary, BPPM incurs roughly double the processing latency and an additional ∼10.5 KB of network bandwidth per transfer. Compared to existing PETs, however, BPPM is decisively more efficient and offers a unique, essential feature set. We are hopeful that this combination makes BPPM a practical choice for web services.

6. Cost–Benefit Analysis

In this section, we discuss the costs and benefits incurred by the parties involved in BPPM. While this framework has not yet been deployed in a real-world production setting, we argue for its usefulness and benefits if applied.

6.1. Service Users

Beyond the previously discussed processing latency and network bandwidth, BPPM imposes no additional costs on service users. More importantly, it offers several significant privacy benefits. By utilizing the attested-execution property, BPPM guarantees that privacy policies are enforced continuously. It also simplifies data-sharing decisions by providing a single list of all data-processing statements, eliminating the need to recursively navigate the privacy policies of all the involved parties.
This approach aligns with efforts to improve transparency and control in data-sharing practices [37]. By allowing service users to select specific subsets from a consolidated list, it facilitates privacy negotiation beyond just the primary data user. Furthermore, this unified list enables the seamless integration of existing automated privacy negotiation tools, such as P3P [38] and Privacy Bird [39], thereby reducing the burden on users.
Studies show that identical privacy policies can lead to different code implementations [40,41]. Furthermore, the privacy policy or the corresponding processing code may change after personal data has been collected, often without notifying the service users [42]. This situation creates a potential loophole for service providers. In the BPPM framework, service users not only agree to the data processing terms outlined in the privacy policy but also to the auditor-approved specific processing code, which effectively tackles these concerns.

6.2. Service Providers

In today's environment, service providers must allocate substantial resources to comply with various privacy regulations such as GDPR [43], PIPEDA [44], and CCPA [45]. Table 2 estimates the costs that a mid-sized organization (50 to 250 employees) might incur to comply with these three key regulations.
Addressing the intricacies of compliance necessitates substantial expertise in privacy-related matters, a challenge that is especially impactful for smaller organizations [4]. However, research suggests that specific personal data is predominantly utilized for a limited range of purposes [42,46]. This insight promotes the reuse of privacy-related software, ultimately resulting in significant cost savings.
Acquiring processing code from a provider does involve some costs. While we have yet to implement BPPM in a real-world scenario, we believe that if it were applied, these costs would be significantly lower than those presented in Table 2. Crucially, service providers can now delegate auditing and privacy responsibilities to their code providers, thereby reducing complexities, and minimizing the delays typically associated with software development.
Table 2. Current compliance cost comparison across regulations.

Cost Category | GDPR | CCPA | PIPEDA
Initial incorporation of privacy features in existing software | USD > 100 K [47,48] | USD > 100 K [49,50] | USD > 30 K [51]
Annual software maintenance | USD > 50 K [47,48] | Not found | USD > 15 K [51]
Privacy officers' annual salary | USD > 200 K [47,52,53] | USD > 200 K [47,52,53] | USD > 200 K [47,52,53]
Annual privacy consultant | USD > 10 K [47,54] | USD > 10 K [47] | USD > 10 K [47]
Annual privacy-specific audit | USD > 30 K [48,54] | USD > 20 K [50] | Not found
Total annual cost | USD > 390 K | USD > 330 K | USD > 255 K
From the perspective of code providers, the potential for extensive reuse of privacy-preserving processing code allows them to continuously develop and expand their libraries. The piracy-free distribution model of BPPM not only motivates code providers but also promotes a sustainable ecosystem.
At present, there is a shortage of privacy auditors [55], and the auditing process requires considerable time, which affects business operations. BPPM effectively tackles these practical challenges by allowing the same data processing statement and its corresponding data processing code to be audited only once at the code provider’s location. This approach not only reduces ambiguity surrounding personal data processing [41,56] but also encourages the standardization of personal data processing across various organizations worldwide.
BPPM effectively minimizes the scope of auditing. In the current landscape, auditors are entrusted with the critical task of meticulously overseeing the storage and retention of personal data. They must also conduct a comprehensive analysis of all software flows associated with that data. Importantly, these flows can evolve over time, necessitating re-audits.
Since BPPM does not expose personal data in plaintext, many of the data handling and auditing requirements related to data storage are significantly lessened. Furthermore, other than the code provided by the code provider, no additional components of the service provider’s software are capable of processing personal data, thereby eliminating the need for privacy inspections across the remainder of their codebase.

7. Related Work

Numerous studies have proposed solutions to address the privacy concerns of data owners. However, little research has focused on designing PETs that take into account why data users would want to deploy them. To the best of our knowledge, the concept of bidirectional privacy has not been fully explored, and there has been no direct work on it. Still, some studies have examined related privacy aspects of using web services, which we discuss next.
Agrawal et al. propose the concept of Hippocratic Databases [57]. The authors define ten privacy principles for data access that preserve the privacy of the data owner (e.g., limited disclosure, consented use, and compliance). LeFevre et al. enhance this concept at the cell-level granularity of the database [58]. However, there is still a requirement for an external trusted third party, like an auditor. Essentially, the auditor’s job is to check whether everything functions according to these principles. In practice, this may be quite complex and prone to errors. The auditor must check all code, all data flows, etc., to ensure that privacy policies have been implemented properly.
There is currently a shortage of auditors [55]. Hence, to simplify the auditing process, XACML [59] can be used, where data-access rules are specified in a machine-readable XML format. A data user submits a data-access request to a policy enforcement point (PEP), which forwards it to another module called a policy decision point (PDP). The PDP permits data access after consulting the XACML policy. In this way, an auditor only needs to ensure that the privacy policy stated on the website and the corresponding XACML rules are aligned.
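A toy illustration of this PEP/PDP split follows. Real XACML policies are XML documents evaluated by a dedicated PDP engine; the dictionary-based policy and the function names here are ours, for exposition only.

```python
POLICY = {  # hypothetical machine-readable access rules
    ("analytics_service", "email", "aggregate_statistics"): "Permit",
    ("ad_network", "email", "targeted_ads"): "Deny",
}

def pdp_decide(subject, resource, purpose):
    """Policy decision point: consult the policy; default-deny."""
    return POLICY.get((subject, resource, purpose), "Deny")

def pep_handle(request):
    """Policy enforcement point: forward the request to the PDP."""
    return pdp_decide(request["subject"], request["resource"],
                      request["purpose"]) == "Permit"

print(pep_handle({"subject": "analytics_service", "resource": "email",
                  "purpose": "aggregate_statistics"}))  # True
```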
To automate the auditing of source code, static code analysis techniques have been explored to examine information flow and detect potential privacy leaks [60,61]. Zimmeck et al. [62,63] extended this static-analysis-based compliance checking with machine learning techniques.
The problem of ensuring compliance between a privacy policy and the corresponding source code implementation is tackled from another angle by generating the privacy policies from the source code itself. Researchers have used AI-ML techniques to generate an entire privacy policy from a codebase [64,65]. Proof-carrying code (PCC) [66] and automated theorem provers are also explored in this context. However, this approach restricts the developers to using only a specific source code language.
In fact, even after the source code has been audited and verified, there remains a risk of malicious activity from data users; for instance, they can add malicious code after the audit has been completed. To solve the trust issue involved with continuous enforcement, Liu et al. propose distributing the personal data among multiple brokers [67] using Shamir's secret-sharing technique. Each broker is responsible for enforcing the privacy policies on its share.
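For concreteness, a minimal sketch of the underlying (t, n) secret-sharing step follows: any t of n broker shares reconstruct the secret, while fewer reveal nothing. The field prime and parameters are illustrative, not taken from [67].

```python
import random

P = 2**127 - 1  # prime field modulus (illustrative)

def split(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, t=3)          # distribute among five brokers
assert reconstruct(shares[:3]) == 123456789  # any three shares suffice
```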
If most data brokers are honest, this method faithfully enforces the privacy policy. However, it requires enough honest data brokers to be present during the data access, which might not be practical. To mitigate this concern, the trustworthiness of a blockchain has been explored: Alansari [68] leverages smart contracts [69] to enforce policies.
Private data is often shared among multiple parties. In fact, sharing various private data in different sectors, such as health or finance, can lead to numerous benefits [70,71]. Moreover, there are situations where the sharing of collected private data is an essential requirement [72]. In this context, Iyilade et al. proposed an XML-like policy language called P2U [12] to provide a data owner with more control over their privacy.
Hails [73] is a proposed web framework providing a declarative policy language and an access control mechanism to enhance the privacy of personal data accessed by third-party applications hosted by a website. POLICHECK [74] is built on the concept of automated data-flow checking: the concept of the receiving entity is incorporated during verification of the data flow, making it suitable for policy compliance checking where personal data must be shared among multiple parties.
Tracking and control of data-sharing activities can be exercised by capturing the details of all such activities via a trusted entity on tamper-proof media such as a blockchain [68]. Besides tracking data sharing, it is also important to guarantee that all data recipients process data in a privacy-preserving manner. Noorian et al. propose a trust- and reputation-based monitoring mechanism [75]. It works based on data owners' complaints about suspected privacy violations; their solution automatically detects and penalizes any data users who use personal data for non-consensual purposes.
Personal data must not be shared unnecessarily. Moreover, when sharing personal data with a third party, the primary data collector must reveal only the necessary subsets of the received data. On the other hand, the authenticity and integrity of personal data are also important concerns for a third-party data user. Recently, Li et al. used a redactable signature scheme for this purpose [76]: they propose a cloud framework for sharing health data with third parties after removing the sensitive parts of that data.
If personal data is available in plaintext form, even for a short time (e.g., while performing computations on it), re-hiding is not possible: the data can be cached and used later, covertly. Several interesting cryptographic primitives have been proposed to enable computation on encrypted data without revealing plaintext values at all. Homomorphic encryption [77] allows computation on ciphertexts and returns the computed results as ciphertexts too.
We have already discussed functional encryption, which allows computation of a specific function on encrypted data [78]. Unlike homomorphic encryption, it keeps the result available in plaintext, allowing others to use the result of the computation. Similarly, multi-party computation [79] is a mature cryptographic primitive, where a function is calculated jointly by multiple untrusted parties. None of these parties need to reveal their input to any of the other parties. Although it has a performance bottleneck [80], several improvements have been made [32].
Brands proposed a mechanism to encode different personal data into a structure called a digital credential [81]. With it, certain computations can be performed, without revealing the encoded data as plaintext. Additionally, it has been extended to facilitate the sharing of personal data among different parties, enabling a delegator to be authorized to show only a selected subset of the encoded personal data by a delegatee [82].

Existing TEE-Based Solutions

Pure cryptographic schemes that allow computations on hidden data (e.g., homomorphic encryption and multi-party computation) are not very practical: either they are expensive in terms of computation and communication overhead, or they permit only a limited, predefined set of computations on the non-plaintext data. To augment these cryptographic methods, TEE technology [83] can be used to perform computation on personal data without revealing it in plaintext. For example, a TEE can be used to implement functional encryption [84], yielding a performance improvement of up to 750,000× over the traditional cryptographic setting. Besides hiding data, the TEE also guarantees the integrity of the computation.
PESOS [85] represents one of the initial works in the realm of TEE-based privacy policy enforcement. However, in contrast to BPPM, PESOS necessitates additional infrastructural components: it not only relies on a TEE but also mandates integration with a Kinetic Open Storage system. An XACML-based access control system has also been combined with a TEE to ensure continued compliance [86]. Birrell et al. propose delegated monitoring [87], in which an enclave at the data user's location verifies the identity and other credentials of requesting applicants before granting them access to any personal data. However, simply confirming the recipient's identity may not be enough; the nature of the computation they perform is also crucial.
BPPM addresses this aspect but is not the first to explore it. TEEKAP [88] proposed a data-sharing platform that allows data recipients to compute only a predefined function on the received data. BPPM builds upon this by incorporating multiple layers of data recipients and introducing the concept of bidirectional privacy.
Zhang et al. propose a trusted and privacy-preserving data trading platform [89] in which access control to personal data is enforced by an enclave on the data user’s side. If access to the personal data is allowed, then the data processing application must also run within an enclave to ensure that the personal data is never revealed in plaintext. However, executing a very large application may cause problems for TEE-based solutions because of the enclave’s size limitations.
MECT [28] addressed this issue by dynamically creating multiple enclaves and utilizing local attestation among them, which proved to be inefficient and complex. In this context, Wagner et al. proposed combining additional hardware, specifically a trusted platform module (TPM [90]), with a TEE to support a larger code space [86]. In contrast, BPPM adopts a different approach to tackle the problem: code components are stored as encrypted dynamic libraries and loaded on demand. This approach not only reduces the need for additional hardware but also significantly enhances efficiency.
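The sketch below illustrates this on-demand loading idea under stated assumptions: AES-GCM decryption stands in for enclave sealing, and loading through ctypes stands in for mapping the decrypted library inside the enclave. In the actual SGX setting, the plaintext never leaves enclave memory.

```python
import ctypes, os, tempfile
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def load_encrypted_library(blob: bytes, key: bytes):
    nonce, ct = blob[:12], blob[12:]
    plain = AESGCM(key).decrypt(nonce, ct, None)  # decrypt "inside the TEE"
    with tempfile.NamedTemporaryFile(suffix=".so", delete=False) as f:
        f.write(plain)
        path = f.name
    try:
        return ctypes.CDLL(path)                  # resolve entry points on demand
    finally:
        os.unlink(path)                           # drop the plaintext file
```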
Executing application code within an enclave is essential to ensure that personal data remains undisclosed in plaintext. However, this approach alone does not sufficiently guarantee privacy. Malicious code could be crafted to expose private information directly as the output of the computation. Therefore, it is necessary to verify the privacy preservation properties of the code prior to its execution within the enclave.
With the concept of template code [28], a data owner can specify code that is permitted to execute solely on their personal data within the enclave. Nevertheless, this method requires the data owner to review the template code and the corresponding privacy parameters, which often demands a level of expertise and time that is impractical in many situations. BPPM addresses this issue by incorporating privacy auditors. In BPPM, a privacy auditor verifies the privacy properties of the code and provides a signature. The data owner only needs to verify this signature before allowing the code to execute at the data user’s end.
Admittedly, BPPM continues to depend on human auditors for the intricate task of verifying the privacy properties of source code. However, because the same code can be reused, BPPM significantly reduces the auditors' workload. Another practical concern in this area is that code providers may hesitate to disclose their source code even while allowing others to use their software as binaries. Furthermore, these binaries should not be shared unlawfully, as that could undermine the profitability and motivation of the code provider. BPPM effectively mitigates these concerns by ensuring that all code binaries always remain encrypted and are tied to a specific TEE.

8. Conclusions

In web-service scenarios, private data may be shared among a hierarchy of service providers in a “layered” manner. We propose a solution framework called BPPM, designed to address several privacy-related challenges in these contexts. With BPPM, service users can be assured that their private data will never be disclosed in plaintext and will only be used for approved purposes. Additionally, BPPM streamlines the decision-making process for service users concerning data sharing by providing them with a consolidated list of proposed data processing statements, even when services are delivered through a complex hierarchy of service providers.
In addition to addressing various privacy concerns for service users, BPPM also safeguards the trade secrets of service providers. While still ensuring the service users' privacy, service providers in BPPM are not required to disclose the identities, or even the existence, of their downstream sub-service providers (or suppliers), nor must they reveal the identities of their service users to those downstream providers. Nevertheless, the authenticity of the service users' private data can still be verified downstream.
Additionally, BPPM addresses various practical challenges and offers multiple incentives for service providers to engage with and adopt this framework. For instance, it minimizes auditing costs by introducing a mechanism that allows for the use of pre-audited code in a manner that safeguards against piracy while also helping to mitigate data breaches and their associated expenses.
We conducted a formal analysis of the claimed security and privacy features of BPPM and demonstrated that it is secure within a UC framework. Furthermore, we implemented the framework and assessed its performance, which indicates that, compared to a standard web service without privacy protections, BPPM offers numerous privacy features (and other benefits) with minimal overhead. These performance metrics further reinforce its viability for practical adoption.

Author Contributions

Conceptualization, S.K.P. and D.A.K.; methodology, S.K.P. and D.A.K.; software, S.K.P.; validation, S.K.P. and D.A.K.; formal analysis, S.K.P.; investigation, S.K.P.; resources, D.A.K.; data curation, S.K.P. and D.A.K.; writing—original draft preparation, S.K.P. and D.A.K.; writing—review and editing, S.K.P. and D.A.K.; supervision, D.A.K.; project administration, D.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This project was made possible in part through the support of the National Cybersecurity Consortium and the Government of Canada (CSIN).

Data Availability Statement

The implementation of our solution is publicly available at https://github.com/sumitkumarpaul/bidirectional_privacy.git (accessed on 1 October 2025). This repository also includes the necessary information on how to use our solution and how to reproduce the performance results on a TEE-enabled platform.

Acknowledgments

During the preparation of this manuscript, the authors used https://app.grammarly.com (accessed on 31 October 2025) to fix grammatical errors and rephrase the text. The authors utilized https://chatgpt.com (accessed on 31 October 2025) and https://www.perplexity.ai (accessed on 31 October 2025) to gather some of the supporting references. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BPPM | Bidirectional Privacy Preservation for Multi-Layer Data Sharing in Web Services
TEE | Trusted Execution Environment
COED | Computation On Encrypted Data
HE | Homomorphic Encryption
MPC | Multi-Party Computation
PET | Privacy-Enhancing Technology
RSS | Redactable Signature Scheme

Appendix A. Situations Where Service Providers Need Privacy

In today's world, many service providers offer Web APIs that can be used by others, including other service providers. For instance, consider DU_{1,1}, a streaming media company with an extensive collection of movies from across the globe. DU_{1,1} collects private data such as age, preferences, time of watching, and duration of watching from its consumers. Upon studying some statistics, DU_{1,1} realizes that some clients are keen on watching Japanese movies and other content but understand only English and want to subscribe to voice-dubbed versions.
However, obtaining high-quality dubbing for Japanese movies is challenging; assume that various competing streaming media companies attempt to dub such content but fail. DU_{1,1} finds an AI-based translator company, DU_{2,1}, that currently specializes in text-document translation. Their specialty is that they provide Web APIs that can translate documents according to the audience's context and personality (age, interests, etc.). DU_{1,1} discovers that it can use DU_{2,1}'s APIs in a specific manner that works perfectly for dubbing Japanese movies.
Consequently, DU_{1,1} enters into an NDA with DU_{2,1} and uses their Web API to dub numerous Japanese movies in its collection. As a result, DU_{1,1} becomes popular for its Japanese movie collection, which now has excellent dubbing quality. DU_{1,1} would like to hide the identity of DU_{2,1}, and even the existence of a text-translator organization, behind the dubbing. In this example, the data user (DU_{1,1}) has a trade secret that it wishes to keep private, which is independent of, but related to, the privacy-preservation mechanisms required for its own subscribers (i.e., bidirectional privacy is needed).
As another example, a supplier's reputation can become a service provider's reputation. In some cases, changing a supplier may negatively affect an organization's reputation, even if the quality of service remains within bounds. However, depending on the circumstances, it may be necessary to make such a change. Suppose DU_{1,1} discovers a new, promising, but less reputable supplier that could replace its current supplier, DU_{2,1}, and enhance its profit margin while maintaining the quality of service. A more concerning scenario is that the current supplier, DU_{2,1}, is no longer available (e.g., a supplier closing operations because of political circumstances), making it crucial to find a replacement. Neither situation would be as problematic if the identity of the supplier had been kept secret from the start.
At times, suppliers may also prefer to keep their identities hidden. For instance, consider DU_{2,1}, a medicine manufacturing company that operates in one country and has been supplying critical medicines to DU_{1,1}, a medicine-selling company located in another country, for the past two decades. Because of a very recent scandal, DU_{1,1} has just been discredited and placed on a “watch list” of companies with questionable business practices. This situation poses a potential risk to the reputation of DU_{2,1}, despite its adherence to ethical business practices and its lack of awareness of DU_{1,1}'s misconduct. In retrospect, it would have been beneficial for DU_{2,1} to have kept its identity confidential from the outset.
As a final example, to reduce operating costs, international banks may outsource credit-history checking to another organization (DU_{2,1}) that operates near the client's location. However, there have been instances in which influential clients obtained favorable credit checks by exerting pressure on, and offering bribes to, the verifying organization [91]. Such influence would not have been feasible if the identities of the client and the verifying organization had remained concealed from one another.

Appendix B. TEE and Its Formal Representation

A TEE-enabled platform allows the creation of an isolated and secure software execution environment, known as an enclave. Each enclave can be configured to execute a specific piece of code.
A TEE provides two principal features: (a) attested execution, which guarantees that, after the creation of the enclave, the executing code cannot be altered by anyone; and (b) confidentiality, meaning that, after creation, the contents of an enclave (both code and data) always remain encrypted, and only a trusted hardware CPU core bound to the TEE can access them.
A formal model of a TEE is required to analyze the security of TEE-related protocols. Pass et al. formally defined G_att in [13], which represents the ideal functionality for trusted execution environments. A party (P) with TEE access first creates a new enclave to run the program code (prog) via an install() call. On success, G_att returns a unique ID, known as the enclave ID (eid), to the caller. The program code defines several entry points to the enclave.
After obtaining the eid, P can invoke those entry points via a resume() call with corresponding inputs. Accordingly, the enclave performs a specific computation according to prog. When the computation is complete, the output outp and its attestation are returned to the caller. The attestation of outp is denoted by σ_TEE(outp), a signature produced with a secret key sk_TEE stored in the TEE hardware. σ_TEE(outp) is generated over the combination of eid, prog, and outp: σ_TEE(outp) = Σ_TEE.Sig(sk_TEE, (eid, prog, outp)).
To verify σ_TEE(outp), the corresponding public key pk_TEE can be obtained by invoking G_att.getpk(). Successful verification of σ_TEE(outp) confirms that this specific enclave instance, executing exactly the expected code on a legitimate TEE platform, produced the computation result.
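The following toy rendering of the G_att abstraction may help fix the interface. It is not a real TEE: an Ed25519 software key stands in for the hardware key sk_TEE, and the serialization of (eid, prog, outp) is ours.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Gatt:
    def __init__(self):
        self._sk = Ed25519PrivateKey.generate()  # models sk_TEE
        self._enclaves = {}                      # eid -> installed prog
        self._next_eid = 0

    def getpk(self):
        return self._sk.public_key()             # models G_att.getpk()

    def install(self, prog):
        """Create an enclave running prog and return its eid."""
        self._next_eid += 1
        self._enclaves[self._next_eid] = prog
        return self._next_eid

    def resume(self, eid, inp):
        """Run an entry point; return outp with sigma_TEE(outp)."""
        prog = self._enclaves[eid]
        outp = prog(inp)
        msg = repr((eid, prog.__name__, outp)).encode()
        return outp, self._sk.sign(msg)          # binds (eid, prog, outp)

gatt = Gatt()
eid = gatt.install(lambda x: x * x)
outp, sigma = gatt.resume(eid, 7)
gatt.getpk().verify(sigma, repr((eid, "<lambda>", outp)).encode())  # raises if forged
```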

Appendix C. Further Details of Protocol

Appendix C.1. Detailed Sequence of Setup-Stage

Each data user must complete this stage before processing any service request.
DU_{i,j} first creates and initializes a new enclave (Figure A1, steps 1 through 3): after setting up the internal data structures, DU_{i,j} launches a new enclave that executes the predefined trusted base code, Prog_Eclv (see Figure 5). Subsequently, DU_{i,j} requests the newly established enclave to generate an asymmetric key pair (epk, esk). The enclave reveals the public key epk while keeping esk secret. This key pair enables both the DO and the CP to transmit messages securely to that specific enclave. Before using epk, the remote party checks the attestation of epk (as detailed in Appendix B) to confirm its authenticity.
Next, DU_{i,j} acquires the audited code ciphertexts from the CP (steps 4 through 6): an honest-but-curious network observer may track the interactions between DU_{i,j} and the CP to learn the number of code components received from the CP. This might reveal how many of the |PDS_{i,j}| processing statements will be performed locally by DU_{i,j}. To mitigate this risk, DU_{i,j} issues dummy requests when communicating with the CP, effectively masking the actual number of code components obtained. The CP returns encrypted versions of all licensed code to DU_{i,j}, along with additional dummy (unlicensed stand-in) code. Additionally, the CP pads both the licensed and dummy code to the same length, rendering them indistinguishable from one another.
Figure A1. Detailed sequence of “setup” stage.
DU_{i,j} inserts the code ciphertexts and becomes ready (steps 7 through 10): DU_{i,j} could be dishonest and could try to insert malicious source code into its enclave that leaks private information during execution. Hence, after decrypting, the base code (Prog_Eclv) of the enclave first verifies the code hash and the auditor's signature. Since enclaves are limited in size, the obtained code components are saved outside the enclave, but in a sealed format [14]. The sealed format ensures that only this specific enclave will be able to load and execute the sealed code in the future.
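A condensed sketch of these steps follows. The function and parameter names are ours, and AES-GCM encryption under a hypothetical enclave-specific sealing key (16, 24, or 32 bytes) stands in for SGX sealing.

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_and_seal(code_plain: bytes, auditor_sig: bytes,
                    auditor_pk: Ed25519PublicKey, sealing_key: bytes) -> bytes:
    # Raises InvalidSignature if the code was not approved by the auditor.
    auditor_pk.verify(auditor_sig, hashlib.sha256(code_plain).digest())
    nonce = os.urandom(12)
    # Only the enclave holding sealing_key can unseal and execute this later.
    return nonce + AESGCM(sealing_key).encrypt(nonce, code_plain, None)
```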

Appendix C.2. Detailed Sequence of Send Original Data Stage

This stage (Figure A2) executes when the DO sends their personal data and approved processing consent to DU_{1,1}'s enclave. The enclave verifies their authenticity and then reveals the plaintext version of the consent structure (PC) to DU_{1,1} while keeping D secret.
Figure A2. Detailed sequence of “Send Original Data” stage.
The DO sends D and PC to DU_{1,1}'s enclave (Figure A2, steps 1 through 6): the DO first agrees to a subset of DU_{1,1}-proposed data-processing statements and fills PC.PDS[] and D.DS[] accordingly. To prove their authenticity, the DO also signs them digitally. However, the DO cannot simply use their long-term secret key, sk_DO, to generate these signatures: producing a signature on the reduced version of the consent (required during the forwarding stage) would then require sharing sk_DO, which is unacceptable for security. Additionally, the PC structure is visible outside the enclave environment, so a curious data user could trace the DO's identity from the signature.
Thus, the DO generates a fresh key pair (pk_con, sk_con) for producing PC.SC, which acts as a signer-anonymous redactable signature [35]. Because (pk_con, sk_con) is freshly generated, data users cannot determine the identity of the DO by observing PC.SC. However, the DO certifies pk_con and D with another signature, D.DOS, this time using its long-term secret key, sk_DO. This is not an issue, as only the trusted enclave can view and verify this signature. Furthermore, to facilitate the redaction operation, the DO stores sk_con in D.CSK, ensuring that it can be used only by the trusted enclave to generate a new consent signature for a subset of PC.PDS[]. Subsequently, the DO computes and fills all other fields of D and PC and sends them to DU_{1,1}, encrypted with the enclave's public key, epk.
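The key handling on the DO's side can be sketched as follows. Field names follow Figure 3; the JSON serialization, Ed25519 choice, and statement strings are illustrative assumptions, not the exact scheme of our implementation.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

sk_do = Ed25519PrivateKey.generate()    # long-term key; never shared
sk_con = Ed25519PrivateKey.generate()   # fresh per-consent key pair
pk_con = sk_con.public_key()

pds = ["compute-average-age", "book-hotel"]      # hypothetical agreed statements
pc_sc = sk_con.sign(json.dumps(pds).encode())    # PC.SC: unlinkable to the DO

# D.DOS: long-term certification of pk_con together with the data. Only the
# trusted enclave ever sees this, so PC.SC cannot be linked to the DO outside.
pk_con_bytes = pk_con.public_bytes(Encoding.Raw, PublicFormat.Raw)
d_dos = sk_do.sign(pk_con_bytes + b"personal-data-bytes")
# sk_con itself travels inside D.CSK, encrypted to the enclave's epk.
```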
DU_{1,1}'s enclave verifies the received information (steps 7 through 10): after receiving the consent and data ciphertexts (PC_ct and D_ct), DU_{1,1} passes them to its enclave. A malicious DU_{1,1} may try to replay personal data sent by one data owner with the consent sent by another data owner, or replay a previous version of personal data sent by the same data owner with a new consent.
Therefore, after decrypting PC_ct and D_ct, the enclave verifies the link between PC and D by inspecting PC.SC and PC.DH. Since the DO never shares sk_DO, DU_{1,1} cannot attempt any forgery attack. On the other hand, a malicious DO may try to gain an advantage while accessing the service by replaying the encrypted version of a different data owner's D with its own PC structure. The enclave ensures this has not happened by inspecting D.DOC, D.DOS, PC.VK, and PC.SC and their interconnection.
The enclave stores them securely and reveals only the plaintext PC (steps 11 and 12): after verification, the enclave stores both PC and D in a sealed format and then reveals the plaintext PC to DU_{1,1}. If the DO has not provided consent for all data-processing activities, certain optional services may not be available; the plaintext PC therefore enables DU_{1,1} to clearly understand the consents granted by the DO and to identify which services can be offered based on them. Finally, the enclave links both the data (D) and the PC with a newly created data identifier (Did). DU_{1,1} may use this Did in the future to request processing on this specific data–consent pair.

Appendix C.3. Detailed Sequence of Process-Stage

DU_{i,j} obtains a data–consent pair either from the DO (if i = j = 1) or from parent(DU_{i,j}). In this data-processing stage (Figure A3), DU_{i,j}'s enclave performs the requested computation on the personal data and returns the result to DU_{i,j}.
DU_{i,j}'s enclave performs processing and returns the result (Figure A3, steps 1 through 5): since only the trusted enclave can access the personal data, DU_{i,j} requests its enclave with a specific data-processing statement, S, and the Did index specifying a particular data–consent pair. Even if S is present within LPDS_{i,j}, the sender of the data (i.e., either the DO or parent(DU_{i,j})) may not have allowed S. Therefore, before processing, the enclave verifies whether S is present within the consent (PC.PDS[]).
Figure A3. Detailed sequence of “process” stage.
If present, DU_{i,j}'s enclave locates the sealed code corresponding to S and loads it within the enclave. It then executes that code on the specified personal data and returns the computation result to DU_{i,j}. Since TEEs can provide stateful obfuscation [13], our solution can also provide differentially private [15] computation results. To achieve this, the enclave may optionally add some noise before outputting the result. The enclave can also track the privacy budget in Did-specific internal state; if the privacy budget is depleted, the enclave does not reveal the result.
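A minimal sketch of this optional differential-privacy step follows; the budget bookkeeping and Laplace sampling are illustrative, and all parameters are assumptions. (The difference of two exponential variates with scale b is Laplace(0, b).)

```python
import random

budgets = {}  # Did -> remaining epsilon (enclave-internal state in BPPM)

def process_with_dp(did, result, sensitivity, epsilon, total_budget=1.0):
    remaining = budgets.get(did, total_budget)
    if epsilon > remaining:
        raise PermissionError("privacy budget depleted for this Did")
    budgets[did] = remaining - epsilon
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return result + noise

print(process_with_dp("did-42", result=37.0, sensitivity=1.0, epsilon=0.1))
```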

Appendix C.4. Detailed Sequence of Forward-Stage

In this stage, DU_{i,j} forwards the received personal data to any of its child nodes for further processing (Figure A4). Before forwarding, DU_{i,j} may reduce the data-processing capability. DU_{i,j} may re-enter this stage multiple times to forward the same personal data to different child nodes, possibly with different processing capabilities.
Figure A4. Detailed sequence of “forward” stage.
DU_{i,j}'s enclave prepares the personal data and reduced consent (Figure A4, steps 1 through 6):
Since DU_{i,j} does not have direct access to the personal data, it cannot produce a new signature on the reduced version of the consent; it requests its enclave for that purpose. The enclave removes the requested PDS_rem[] from PC.PDS[] and creates the corresponding consent structure (FPC). The enclave also generates the signature FPC.SC using the secret key D.CSK. Since only the DO and the trusted enclave can access this signing key, the new signature cannot be forged. Subsequently, DU_{i,j}'s enclave encrypts the data (D) and FPC with the child's enclave's public key and outputs them to DU_{i,j} for forwarding to the child node.
Note that D.DS[] remains unchanged, as the minor bandwidth savings do not justify the added protocol complexity, especially since D.DS[] is typically small in most applications. The child's enclave will process only the subset of D.DS[] allowed by FPC.PDS[]; hence, even if additional data items exist in D.DS[], they remain inaccessible to the child.
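The redaction step itself reduces to the following sketch. Field names follow Figure 3, and the serialization mirrors the DO-side sketch in Appendix C.2; both are illustrative.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def redact_consent(pc_pds, pds_rem, csk: Ed25519PrivateKey):
    """Remove the statements in pds_rem and re-sign with D.CSK."""
    fpc_pds = [s for s in pc_pds if s not in pds_rem]  # reduced FPC.PDS[]
    fpc_sc = csk.sign(json.dumps(fpc_pds).encode())    # fresh FPC.SC
    return fpc_pds, fpc_sc
```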
The child's enclave verifies and stores the received items (steps 7 through 10):
DU_{i,j} delivers the ciphertexts to the child's enclave by issuing a remote call. Upon receiving the ciphertexts, the operations within the enclave closely resemble those of the Send Original Data stage. Specifically, the enclave decrypts the ciphertexts, verifies their authenticity, ensures the linkage between them, and then saves them in the sealed format for future use. Finally, the enclave reveals the plaintext consent to the child node.

References

  1. GDPR and Third Parties: What Companies Need to Know. 2025. Available online: https://www.riddlecompliance.com/blog/gdpr-and-third-parties-what-companies-need-to-know (accessed on 1 October 2025).
  2. How Much Does a SOC 2 Audit Cost? 2025. Available online: https://secureframe.com/hub/soc-2/audit-cost (accessed on 1 October 2025).
  3. Freitas, M.D.C.; Mira Da Silva, M. GDPR Compliance in SMEs: There is much to be done. J. Inf. Syst. Eng. Manag. 2018, 3, 30. [Google Scholar] [CrossRef]
  4. Tetteh, A.K. Cybersecurity needs for SMEs. Issues Inf. Syst. 2024, 25, 235–246. [Google Scholar] [CrossRef]
  5. Awan, M.; Alam, A.; Kamran, M. Cybersecurity Challenges in Small and Medium Enterprises: A Scoping Review. J. Cyber Secur. Risk Audit. 2025, 2025, 89–102. [Google Scholar] [CrossRef]
  6. Singla, S. Regulatory Costs and Market Power. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
  7. Chambers, D.; Collins, C.A.; Krause, A. How do federal regulations affect consumer prices? An analysis of the regressive effects of regulation. Public Choice 2019, 180, 57–90. [Google Scholar] [CrossRef]
  8. Berghel, H. “Free” Online Services and Gonzo Capitalism. Computer 2023, 56, 86–92. [Google Scholar] [CrossRef]
  9. Malhotra, H. Trade secret in intellectual property. SSRN Electron. J. 2021. [Google Scholar] [CrossRef]
  10. Study on Trade Secrets and Confidential Business Information in the Internal Market. 2013. Available online: https://ec.europa.eu/docsroom/documents/27703/attachments/1/translations/en/renditions/native (accessed on 1 October 2025).
  11. Demirel, D.; Derler, D.; Hanser, C.; Pöhls, H.; Slamanig, D.; Traverso, G. D4.4: Overview of Functional and Malleable Signature Schemes. 2017. PRISMACLOUD. Available online: https://www.prismacloud.eu/PRISMACLOUD-D4.4-Overview-of-Functional-and-Malleable-Signature-Schemes.pdf (accessed on 1 October 2025).
  12. Iyilade, J.; Vassileva, J. P2U: A Privacy Policy Specification Language for Secondary Data Sharing and Usage. In Proceedings of the 2014 IEEE Security and Privacy Workshops, San Jose, CA, USA, 17–18 May 2014; pp. 18–22. [Google Scholar] [CrossRef]
  13. Pass, R.; Shi, E.; Tramèr, F. Formal Abstractions for Attested Execution Secure Processors. In Advances in Cryptology—EUROCRYPT 2017; Coron, J.S., Nielsen, J.B., Eds.; Springer International Publishing: Paris, France, 2017; Volume 10210, pp. 260–289. [Google Scholar] [CrossRef]
  14. Anati, I.; Gueron, S.; Johnson, S.P.; Scarlata, V.R. Innovative Technology for CPU Based Attestation and Sealing. 2013. Available online: https://www.intel.com/content/dam/develop/external/us/en/documents/hasp-2013-innovative-technology-for-attestation-and-sealing-413939.pdf (accessed on 1 October 2025).
  15. Dwork, C. Differential Privacy. In Automata, Languages and Programming; Bugliesi, M., Preneel, B., Sassone, V., Wegener, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4052, pp. 1–12. [Google Scholar] [CrossRef]
  16. New Intel Chips Won’t Play Blu-Ray Disks due to SGX Deprecation. Available online: https://www.reddit.com/r/gadgets/comments/s3xhkf/new_intel_chips_wont_play_bluray_disks_due_to_sgx/ (accessed on 1 October 2025).
  17. Confidential Computing Market Worth $59.4 Billion by 2028—Exclusive Report by MarketsandMarkets. 2023. Available online: https://www.benzinga.com/pressreleases/23/06/n32692290/confidential-computing-market-worth-59-4-billion-by-2028-exclusive-report-by-marketsandmarkets (accessed on 1 October 2025).
  18. Trusted Execution Environment Provisioning (TEEP) Architecture. 2023. Available online: https://datatracker.ietf.org/doc/rfc9397/ (accessed on 1 October 2025).
  19. Fei, S.; Yan, Z.; Ding, W.; Xie, H. Security Vulnerabilities of SGX and Countermeasures: A Survey. ACM Comput. Surv. 2021, 54, 126. [Google Scholar] [CrossRef]
  20. Canetti, R. Universally composable security: A new paradigm for cryptographic protocols. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, Newport Beach, CA, USA, 8–11 October 2001; pp. 136–145. [Google Scholar] [CrossRef]
  21. Cheng, R.; Zhang, F.; Kos, J.; He, W.; Hynes, N.; Johnson, N.; Juels, A.; Miller, A.; Song, D. Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contracts. In Proceedings of the 2019 IEEE European Symposium on Security and Privacy (EuroS&P), Stockholm, Sweden, 17–19 June 2019; pp. 185–200. [Google Scholar] [CrossRef]
  22. Lind, J.; Naor, O.; Eyal, I.; Kelbert, F.; Sirer, E.G.; Pietzuch, P. Teechain: A secure payment network with asynchronous blockchain access. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, Huntsville, ON, Canada, 27–30 October 2019; pp. 63–79. [Google Scholar] [CrossRef]
  23. Bidirectional Privacy. Computer Software. Available online: https://github.com/sumitkumarpaul/bidirectional_privacy.git (accessed on 1 October 2025).
  24. Gramine Library OS with Intel SGX Support. 2025. Available online: https://github.com/gramineproject/gramine (accessed on 1 October 2025).
  25. Tsai, C.C.; Porter, D.E.; Vij, M. Graphene-SGX: A practical library OS for unmodified applications on SGX. In Proceedings of the 2017 USENIX Conference on Usenix Annual Technical Conference (USENIX ATC’17), Santa Clara, CA, USA, 12–14 July 2017; USENIX Association: Santa Clara, CA, USA, 2017; pp. 645–658. [Google Scholar]
  26. Mbed TLS. 2025. Available online: https://github.com/Mbed-TLS/mbedtls (accessed on 1 October 2025).
  27. Knauth, T.; Steiner, M.; Chakrabarti, S.; Lei, L.; Xing, C.; Vij, M. Integrating Remote Attestation with Transport Layer Security. arXiv 2019, arXiv:1801.05863. [Google Scholar] [CrossRef]
  28. Zum Felde, H.M.; Morbitzer, M.; Schutte, J. Securing Remote Policy Enforcement by a Multi-Enclave based Attestation Architecture. In Proceedings of the 2021 IEEE 19th International Conference on Embedded and Ubiquitous Computing (EUC), Shenyang, China, 20–22 October 2021; pp. 102–108. [Google Scholar] [CrossRef]
  29. DCsv3 and DCdsv3-Series. 2025. Available online: https://learn.microsoft.com/en-us/azure/virtual-machines/dcv3-series (accessed on 1 October 2025).
  30. Uncovering the Apps That Actually Respect Your Privacy. 2025. Available online: https://surfshark.com/apps-that-track-you (accessed on 1 October 2025).
  31. Han, K.H.; Lee, W.K.; Karmakar, A.; Mera, J.M.B.; Hwang, S.O. cuFE: High Performance Privacy Preserving Support Vector Machine With Inner-Product Functional Encryption. IEEE Trans. Emerg. Top. Comput. 2023, 12, 328–343. [Google Scholar] [CrossRef]
  32. Demmler, D.; Schneider, T.; Zohner, M. ABY—A Framework for Efficient Mixed-Protocol Secure Two-Party Computation. In Proceedings of the 2015 Network and Distributed System Security Symposium, San Diego, CA, USA, 7 February 2015. [Google Scholar] [CrossRef]
  33. CiFEr—Functional Encryption Library. Computer Software. Available online: https://github.com/fentec-project/CiFEr (accessed on 1 October 2025).
  34. Li, F.; Li, X.; Gao, M. Secure MLaaS with Temper: Trusted and Efficient Model Partitioning and Enclave Reuse. In Proceedings of the Annual Computer Security Applications Conference, Austin, TX, USA, 4–8 December 2023; pp. 621–635. [Google Scholar] [CrossRef]
  35. Derler, D.; Krenn, S.; Slamanig, D. Signer-Anonymous Designated-Verifier Redactable Signatures for Cloud-Based Data Sharing. In Cryptology and Network Security; Foresti, S., Persiano, G., Eds.; Springer International Publishing: Milan, Italy, 2016; Volume 10052, pp. 211–227. [Google Scholar] [CrossRef]
  36. XMLRSS—A Java Crypto Provider for Redactable Signatures. Computer Software. 2018. Available online: https://github.com/woefe/xmlrss (accessed on 1 October 2025).
  37. The Transparency & Consent Framework (TCF). 2023. Available online: https://iabeurope.eu/transparency-consent-framework/ (accessed on 1 October 2025).
  38. Cranor, L.F. P3P: Making privacy policies more useful. IEEE Secur. Priv. 2003, 1, 50–55. [Google Scholar] [CrossRef]
  39. Privacy Bird: Find web Sites That Respect Your Privacy. Computer Software. Available online: http://www.privacybird.org/ (accessed on 1 October 2025).
  40. Horstmann, S.A.; Domiks, S.; Gutfleisch, M.; Tran, M.; Acar, Y.; Moonsamy, V.; Naiakshina, A. “Those things are written by lawyers, and programmers are reading that.” Mapping the Communication Gap Between Software Developers and Privacy Experts. Proc. Priv. Enhancing Technol. 2024, 2024, 151–170. [Google Scholar] [CrossRef]
  41. Müller-Tribbensee, T. Privacy Promise vs. Tracking Reality in Pay-or-Tracking Walls. In Privacy Technologies and Policy; Jensen, M., Lauradoux, C., Rannenberg, K., Eds.; Springer Nature: Karlstad, Sweden, 2024; Volume 14831, pp. 168–188. [Google Scholar] [CrossRef]
  42. Wagner, I. Privacy Policies across the Ages: Content of Privacy Policies 1996–2021. ACM Trans. Priv. Secur. 2023, 26, 1–32. [Google Scholar] [CrossRef]
  43. General Data Protection Regulation GDPR. 2025. Available online: https://gdpr-info.eu/ (accessed on 1 October 2025).
  44. The Personal Information Protection and Electronic Documents Act (PIPEDA). 2025. Available online: https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/ (accessed on 1 October 2025).
  45. California Consumer Privacy Act (CCPA). 2024. Available online: https://oag.ca.gov/privacy/ccpa (accessed on 1 October 2025).
  46. Del Alamo, J.M.; Guaman, D.S.; García, B.; Diez, A. A systematic mapping study on automated analysis of privacy policies. Computing 2022, 104, 2053–2076. [Google Scholar] [CrossRef]
  47. CyberSierra. Hidden GDPR Compliance Expenses. 2025. Available online: https://cybersierra.co/blog/hidden-gdpr-compliance-expenses/ (accessed on 1 October 2025).
  48. Thoropass. GDPR Audit Cost: A Guide. 2024. Available online: https://thoropass.com/blog/compliance/gdpr-audit-cost-a-guide/ (accessed on 1 October 2025).
  49. Dorsey. AG Estimates Costs of CCPA Compliance. 2019. Available online: https://www.dorsey.com/newsresources/publications/client-alerts/2019/12/ccpa-update (accessed on 1 October 2025).
  50. Thoropass. CCPA Audit Cost: A Guide. 2024. Available online: https://thoropass.com/blog/compliance/ccpa-audit-cost-a-guide/ (accessed on 1 October 2025).
  51. Sprinto. Complete Guide to PIPEDA Compliance. 2025. Available online: https://sprinto.com/blog/pipeda-compliance/ (accessed on 1 October 2025).
  52. ZipRecruiter. Salary: Chief Privacy Officer (October 2025) United States. 2025. Available online: https://www.ziprecruiter.com/Salaries/Chief-Privacy-Officer-Salary (accessed on 1 October 2025).
  53. USF Health Online. What Is a Chief Privacy Officer? Job Description & Salary. 2024. Available online: https://www.usfhealthonline.com/resources/health-informatics/chief-privacy-officer-job-description-salary/ (accessed on 1 October 2025).
  54. Sprinto. How Much Does GDPR Compliance Cost in 2025? 2025. Available online: https://sprinto.com/blog/gdpr-compliance-cost/ (accessed on 1 October 2025).
  55. Curtis, B. Technical Competency Gaps in 151,000 IT Auditors in the Audit Industry. 2022. Available online: https://www.securitymagazine.com/articles/98555-technical-competency-gaps-in-151-000-it-auditors-in-the-audit-industry (accessed on 1 October 2025).
  56. Pan, S.; Zhang, D.; Staples, M.; Xing, Z.; Chen, J.; Xu, X.; Hoang, T. Is It a Trap? A Large-scale Empirical Study And Comprehensive Assessment of Online Automated Privacy Policy Generators for Mobile Apps. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security 24), Philadelphia, PA, USA, 14–16 August 2024; pp. 5681–5698. [Google Scholar]
  57. Agrawal, R.; Kiernan, J.; Srikant, R.; Xu, Y. Hippocratic Databases. In Proceedings of the VLDB’02: Proceedings of the 28th International Conference on Very Large Databases, Hong Kong, China, 20–23 August 2002; pp. 143–154. [Google Scholar] [CrossRef]
  58. LeFevre, K. Limiting Disclosure in Hippocratic Databases. In Proceedings of the VLDB’04: Proceedings of the International Conference on Very Large Data Bases, Toronto, ON, Canada, 31 August–3 September 2004; Volume 30, pp. 108–119. [Google Scholar]
  59. eXtensible Access Control Markup Language (XACML) Version 3.0. 2013. Available online: https://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.html (accessed on 1 October 2025).
  60. Kunz, I.; Weiss, K.; Schneider, A.; Banse, C. Privacy Property Graph: Towards Automated Privacy Threat Modeling via Static Graph-based Analysis. Proc. Priv. Enhancing Technol. 2023, 2023, 171–187. [Google Scholar] [CrossRef]
  61. Tang, F.; Østvold, B.M. Assessing software privacy using the privacy flow-graph. In Proceedings of the 1st International Workshop on Mining Software Repositories Applications for Privacy and Security, Singapore, 18 November 2022; pp. 7–15. [Google Scholar] [CrossRef]
  62. Zimmeck, S.; Wang, Z.; Zou, L.; Iyengar, R.; Liu, B.; Schaub, F.; Wilson, S.; Sadeh, N.; Bellovin, S.M.; Reidenberg, J. Automated Analysis of Privacy Requirements for Mobile Apps. In Proceedings of the 2017 Network and Distributed System Security Symposium, San Diego, CA, USA, 27 February 2017. [Google Scholar] [CrossRef]
  63. Zimmeck, S.; Bellovin, S.M. Privee: An architecture for automatically analyzing web privacy policies. In Proceedings of the 23rd USENIX Conference on Security Symposium (SEC’14), San Diego, CA, USA, 20–22 August 2014; USENIX Association: San Diego, CA, USA, 2014; pp. 1–16. [Google Scholar]
  64. Yu, L.; Zhang, T.; Luo, X.; Xue, L. AutoPPG: Towards Automatic Generation of Privacy Policy for Android Applications. In Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, Denver, CO, USA, 12 October 2015; pp. 39–50. [Google Scholar] [CrossRef]
  65. Zimmeck, S.; Goldstein, R.; Baraka, D. PrivacyFlash Pro: Automating Privacy Policy Generation for Mobile Apps. In Proceedings of the 2021 Network and Distributed System Security Symposium, Virtual Event, 21–25 February 2021. [Google Scholar] [CrossRef]
  66. Necula, G.C. Proof-Carrying Code. Design and Implementation. In Proof and System-Reliability; Schwichtenberg, H., Steinbrüggen, R., Eds.; Springer: Dordrecht, The Netherlands, 2002; pp. 261–288. [Google Scholar] [CrossRef]
Figure 1. Top-level workflow of BPPM.
Figure 2. Code reuse scenario.
Figure 3. Data structures PC and D and their relationship.
Figure 4. Formal representation of $Prot_{BPPM}$.
Figure 5. Formal representation of $Prog_{Eclv}$.
Figure 6. Ideal functionality, $\mathcal{F}_{BPPM}$.
Figure 7. Communication cost comparison.
Figure 8. Processing time comparison.
Figure 9. Redactable signature performance comparison.
Table 1. Notations and acronyms.

$DO$: Data owner/service user.
$DU_{i,j}$: The $j$-th data user/service provider in the $i$-th layer.
$tree(DU_{i,j})$: Data usage sub-tree rooted at $DU_{i,j}$; $tree(DU_{1,1})$ denotes the entire data usage tree.
$parent(DU_{i,j})$: The parent node of $DU_{i,j}$ within the data usage tree.
$children(DU_{i,j})$: The set of child nodes of $DU_{i,j}$ within the data usage tree.
$CP$: Code provider.
$PDS_{i,j}$: Collection of proposed data processing statements of $DU_{i,j}$. It contains all the data processing statements of $tree(DU_{i,j})$ and is specified within the privacy policy document of $DU_{i,j}$.
$LPDS_{i,j}$: A subset of $PDS_{i,j}$ containing the data processing statements that $DU_{i,j}$ performs locally.
$PDS_{DO}$: Collection of data processing statements agreed to by $DO$.
$DS_{DO}$: Data set corresponding to $PDS_{DO}$.
$Prog_{Eclv}$: The trusted base code of the data user's enclave.
$D$: Structure through which personal data is sent.
$PC$: Structure through which processing consent is sent.
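
To make the notation concrete, the following minimal Python sketch models the entities of Table 1 as data structures. It is for illustration only: all class names, fields, and methods (DataUser, ProcessingConsent, PersonalData, tree()) are assumptions introduced here for readability and are not part of the BPPM implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataUser:
    """DU_{i,j}: the j-th data user/service provider in the i-th layer."""
    layer: int                                       # i
    index: int                                       # j
    pds: list[str] = field(default_factory=list)     # PDS_{i,j}: proposed processing statements
    lpds: list[str] = field(default_factory=list)    # LPDS_{i,j}: subset of PDS_{i,j} processed locally
    parent: Optional["DataUser"] = None              # parent(DU_{i,j})
    children: list["DataUser"] = field(default_factory=list)  # children(DU_{i,j})

    def tree(self) -> list["DataUser"]:
        """tree(DU_{i,j}): this node plus all of its descendants."""
        nodes = [self]
        for child in self.children:
            nodes.extend(child.tree())
        return nodes

@dataclass
class ProcessingConsent:
    """PC: carries the processing statements agreed to by the data owner (PDS_DO)."""
    agreed_statements: list[str]

@dataclass
class PersonalData:
    """D: carries the data set DS_DO released under a given consent (PC)."""
    fields: dict[str, str]           # e.g., {"email": "...", "passport_no": "..."}
    consent_ref: ProcessingConsent   # links D back to the PC it was released under
```

Under this representation, $tree(DU_{1,1})$ corresponds to calling tree() on the root node, and the link between $D$ and $PC$ shown in Figure 3 is modeled by the consent_ref field.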