1. Introduction
With the rapid development of three-dimensional (3D) acquisition technologies, 3D sensors such as light detection and ranging (LIDAR) sensors, stereo cameras, and 3D scanners are increasingly available and affordable. Complemented with two-dimensional (2D) images, the 3D data acquired by these sensors capture rich geometric, shape, and scale information, offering machines an opportunity to better understand their surrounding environments [
1]. In general, 3D data can be represented in different formats, such as depth images, point clouds, meshes, and volumetric grids. Compared to other 3D data formats, the point cloud representation preserves the original geometric information along with associated attributes in 3D space without any discretization [
1]. Therefore, point clouds have been widely adopted in numerous application fields, including 3D scanning and modeling, environmental monitoring, agriculture and forestry, biomedical imagery, and so on [
2].
Recently, deep learning (DL) on point clouds has been thriving in many scene-understanding applications, such as virtual/augmented reality (VR/AR), autonomous driving, and robotics. Nevertheless, the massive amount of point cloud data aggregated from distributed 3D sensors also poses challenges for securing data collection, management, storage, and sharing. Using signal processing or neural network techniques, several efficient point cloud compression (PCC) methods [
3] have been proposed to reduce the bandwidth needed to transmit raw 3D point cloud data over wireless networks and the space needed to store them. However, considerable effort is still required to achieve efficient end-to-end data delivery and optimal storage management. From an architectural perspective, conventional point-cloud-based applications rely on centralized cloud servers for data collection and analysis. Such a centralized design is prone to single points of failure because a successful attack, such as a distributed denial-of-service (DDoS) attack on the control (or data) server, may paralyze the entire system. In addition, a centralized server that manages 3D sensors and stores point clouds in a distributed network environment may become a performance bottleneck (PBN), and it is vulnerable to data breaches caused by curious third parties and to security threats throughout the data acquisition, storage, and sharing process.
Because of several key features, such as the separation of the control and data planes, logically centralized control, the global view of the network, and the ability to program the network, software-defined networking (SDN) can greatly facilitate big data acquisition, transmission, storage, and processing [
4]. At the same time, Blockchain has been recognized as a promising solution for security and privacy in big data applications [
5] with its attractive properties, including decentralization, immutability, transparency, and availability. Therefore, combining SDN and Blockchain demonstrates great potential to revolutionize centralized point cloud systems and address the aforementioned issues.
In this paper, we propose a secure-by-design networking infrastructure called SAUSA, which leverages SDN and Blockchain technologies to secure the access, usage, and storage of 3D point cloud datasets throughout their life-cycle. SAUSA adopts a hierarchical SDN-enabled service network to provide efficient and resilient point cloud applications. Network intelligence based on dynamic resource coordination and SDN controllers ensures optimal resource allocation and network configuration for point cloud applications with diverse QoS requirements. To address security issues in point cloud data collection, storage, and sharing, we design a lightweight and secure data authentication framework based on a decentralized security fabric.
By leveraging a hybrid on-chain and off-chain storage strategy, data owners can store the encrypted meta-data of point clouds into distributed data storage (DDS), which is more reliable than existing solutions [
6,
7] that use cloud data servers to store audit proofs. In addition, encrypting the meta-data on the DDS also protects the privacy of data owners. Data owners place the Swarm hash of the meta-data and the access control policy on the Blockchain (on-chain storage), while the original point clouds are kept on private storage servers. Thanks to the transparency and auditability of Blockchain, data owners have full control over their point cloud data, and authorized users can verify shared data without relying on any trusted third-party authority. Hence, point cloud data integrity verification becomes more credible in a distributed network environment.
In summary, the key contributions of this paper are highlighted as follows:
- (1)
The comprehensive architecture of SAUSA is introduced, which consists of a hierarchical SDN-enabled point cloud service network and a decentralized security fabric, and its key functionalities for handling the network traffic of point cloud applications are described;
- (2)
The core design of the data authentication framework is illustrated in detail, especially for the workflow in data access control, integrity verification, and the structure of hybrid on-chain and off-chain storage;
- (3)
A proof-of-concept prototype is implemented and tested under a physical network that simulates the case of point cloud data sharing across multiple domains. The experimental results verify the efficiency and effectiveness of our decentralized data access authorization and integrity verification procedures.
The remainder of the paper is organized as follows:
Section 2 provides the background knowledge of SDN and Blockchain technologies and reviews the existing state-of-the-art on Blockchain-based solutions to secure big data systems.
Section 3 introduces the rationale and system architecture of SAUSA. The details of the data authentication framework are explained in
Section 4.
Section 5 presents the prototype implementation, experimental setup, performance evaluation, and security analysis. Finally,
Section 6 summarizes this paper with a brief discussion on current limitations and future directions.
3. Design Rationale and System Architecture
Aiming at a self-adaptive and secure-by-design service architecture for assurance- and resilience-oriented 3D point cloud applications, SAUSA leverages SDN to achieve efficient resource coordination and network configuration in point cloud data processing and delivery. By combining Blockchain and distributed data storage (DDS) to build a decentralized authentication network, SAUSA aims to guarantee the security and privacy of data access, usage, and storage in 3D point cloud applications.
Figure 1 demonstrates the SAUSA architecture, which consists of two sub-frameworks: (i) a hierarchical SDN-enabled point cloud service network; (ii) a decentralized security fabric based on Blockchain and DDS.
3.1. Hierarchical SDN-Enabled Point Cloud Service Network
As a promising technology to improve network performance and reduce management costs, SDN is utilized to design a conceptual network architecture for multi-domain PC applications. Since this paper focuses on the Blockchain-based authentication network architecture, only the key components and the workflow of the SDN are briefly described here. The detailed SDN design will be presented in our future work. The left part of
Figure 1 shows the hierarchy of the point cloud service network according to the point cloud application stages: acquisition, aggregation, and analytics. The point cloud data layer acts as an infrastructure layer comprising multiple domain networks, which are responsible for raw data collection, processing, and delivery. In each domain, point cloud centers interconnect with each other via forwarding switches. The 3D sensors generate point clouds and send them to the point cloud centers, which are essentially local servers that process and store the data. Given the decisions made by the SDN controllers, the forwarding switches can forward the data traffic flows efficiently to satisfy the QoS requirements.
The network intelligence and control logic of each domain network are provided by the SDN controller, which can be deployed on fog or cloud computing platforms. Using a pre-defined southbound API, each SDN controller can either update the configuration of the forwarding switches to change the network operations or synchronize their status to maintain a global view of the domain network. Northbound interfaces allow an SDN controller to interact with the upper-level application layer, such as providing domain network status to the system monitoring and accepting updates to the network operation policies. Therefore, these SDN controllers constitute a control layer, which acts as a broker between point cloud applications and fragmented domain networks, and they provide network connectivity and data services among heterogeneous domain networks.
The application layer can be seen as the “system brain” that manages the physical resources of the point cloud data layer with the help of the SDN controllers. Application management maintains registered users and their service requirements, while system monitoring provides the global status of the point cloud ecosystem. Given the inputs from application management and system monitoring, dynamic resource coordination adopts machine learning (ML) algorithms to achieve fast resource (e.g., computation, network, and storage) deployment and efficient service re-adjustment with QoS guarantees.
3.2. Decentralized Security Fabric
As the right part of
Figure 1 shows, the decentralized security fabric consists of two sub-systems: (i) a security services layer based on the microservice-oriented architecture (MoA); (ii) a fundamental security networking infrastructure atop the Blockchain and DDS. To address the heterogeneity and efficiency challenges of developing and deploying security services in a distributed network environment, our security services layer adopts container technology to implement microservices for PC applications. The key operations and security schemes are decoupled into multiple containerized microservices. As containers are loosely coupled from the rest of the system through OS-level isolation, these microservices can be independently updated, executed, and terminated. Each microservice unit (or container) exposes a set of RESTful web service APIs to users of PC applications and utilizes local ABIs to interact with the SCs deployed on the Blockchain.
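As a concrete illustration, a single microservice container could look like the following minimal sketch, assuming Flask for the RESTful API and web3.py for the local ABI binding; the endpoint path, contract address, ABI file name, and returned fields are illustrative assumptions rather than details taken from the SAUSA prototype (the contract function name follows Algorithm 1 in Section 4).

```python
import json
from flask import Flask, jsonify
from web3 import Web3

app = Flask(__name__)
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))   # local Ethereum node (light mode)
with open("contract_abi.json") as f:                     # ABI exported at deployment time
    contract = w3.eth.contract(address="0x" + "00" * 20, abi=json.load(f))

@app.route("/token/<token_id>", methods=["GET"])
def get_token(token_id):
    """Expose the on-chain audit proof to application users via a RESTful endpoint."""
    proof = contract.functions.query_dataAC(token_id).call()
    return jsonify({"token_id": token_id, "audit_proof": proof})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)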
The Blockchain network acts as a decentralized and trust-free platform for security services, and it uses a scalable PoW consensus protocol to ensure the immutability and integrity of the on-chain data on the distributed ledger as long as the majority (51%) of the miners are honest. The security mechanisms are implemented by self-executing SCs, which are deployed on the Blockchain by trusted oracles such as system administrators. Thus, the security services layer can provide secure and autonomous microservices in a decentralized manner. To reduce the overhead of directly recording large data on the distributed ledger, we bring DDS into the security infrastructure as off-chain storage, which is built on a Swarm [
26] network. Unlike IPFS, which does not guarantee storage, Swarm maintains a content-addressed DHT and relies on data redundancy to offer secure and robust data services. Moreover, its built-in incentives make Swarm easier to integrate with the Ethereum Blockchain. The meta-data of point clouds and operation logs, which have heterogeneous formats and various sizes, are encrypted and then saved in the DDS. Data on the DDS can be easily addressed by their references (Swarm hashes), which are recorded on the Blockchain for auditing and verification. A Swarm hash is much smaller (32 or 64 bytes) than its raw data; therefore, recording only the hash improves the efficiency of transaction propagation and preserves privacy by not directly exposing raw data on the transparent Blockchain.
4. Blockchain-Based Lightweight Point Cloud Data Authentication Framework
This section presents the details of the decentralized and lightweight data authentication framework, through which SAUSA guarantees security and privacy preservation for point cloud collection, storage, and sharing. We first introduce the participants and workflow of the framework. Then, we describe the structure of the hybrid on-chain and off-chain storage. Finally, we explain the data access authorization and integrity verification procedures.
4.1. Data Access Control and Integrity Verification Framework
Figure 2a shows the framework for secure data access, storage, and usage based on Blockchain and the DDS. In this framework, owners can upload point clouds generated by 3D sensors to their private servers, which act as service providers for the users of applications. By storing the access control policy and audit proof on the Blockchain, each owner retains full control over its data, and authorized users can verify the data stored on the private server. The overall workflow is divided into three stages according to the 3D point cloud life-cycle.
Data storage: Owners and their private servers are in the same domain, and they can exchange secret keys via a trustworthy key distribution center (KDC). As a result, an owner and its private server can use shared secret keys to establish a secure communication channel for PC data transmission. In Step 1, the owner uses a shared secret key to encrypt the point cloud data and then sends the encrypted data to a private server. After receiving the encrypted point clouds in Step 2, the private server stores them in local storage and then records the meta-data (e.g., configuration and audit proof) on the DDS. In a meta-data item, the configuration contains the URL address of a private server and other data properties such as the format and size, and the audit proof consists of an authenticator of the raw data and a signature signed by the data owner. In Step 3, a site of the DDS stores the received meta-data and calculates a Swarm hash as a unique reference for addressing it on the DDS. Finally, the Swarm hash is returned to the private server, and then the private server transfers the Swarm hash to the data owner.
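To make the meta-data item concrete, the following minimal sketch assembles one such item in Python using the primitives named in Section 5.1 (Fernet and SHA-256); the Ed25519 signature scheme, the JSON field names, and the server URL are illustrative assumptions rather than the exact format used by SAUSA.

```python
import hashlib
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Step 1: the owner encrypts a point cloud segment with the shared secret key.
shared_key = Fernet.generate_key()            # in practice obtained via the KDC
raw_pc = b"...binary point cloud segment..."  # placeholder for the real segment bytes
encrypted_pc = Fernet(shared_key).encrypt(raw_pc)

# Step 2: the private server assembles the meta-data item: configuration + audit proof.
owner_sk = Ed25519PrivateKey.generate()       # the owner's signing key (Ed25519 assumed)
authenticator = hashlib.sha256(raw_pc).hexdigest()
signature = owner_sk.sign(bytes.fromhex(authenticator)).hex()
meta_data = {
    "config": {"url": "https://pc-server.domain-a.example",  # illustrative server URL
               "format": "ply", "size": len(raw_pc)},
    "audit_proof": {"authenticator": authenticator, "signature": signature},
}
meta_bytes = json.dumps(meta_data).encode()   # encrypted before being pushed to the DDS
```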
Data access control: The data access control (AC) process is built on a capability-based access control (CapAC) scheme [27]. In Step 4a, a data user contacts a data owner to negotiate an AC policy for PC data sharing. Then, the data owner verifies the data user's identity and authorizes access rights for the data user given pre-defined AC policies. In Step 4b, the data owner stores the Swarm hash of the meta-data along with the assigned access rights in a distributed ledger (Blockchain). Once the AC data have been successfully saved in an AC token on the Blockchain, a token_id is returned to the data owner. Finally, the data owner sends the token_id back to the data user as a notification, as Step 4c shows. In Step 5, the user first sends data access requests to the private server, which stores the encrypted point cloud data. Then, the private server retrieves the AC policy from the Blockchain and checks whether the access rights assigned to the user are valid, as Step 6 shows. If the access authentication is successful, the private server uses the shared secret keys to decrypt the requested data and returns them to the data user, as Step 7 shows. Otherwise, the private server denies the access requests without sharing the data with unauthorized users.
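A possible shape of the server-side check in Step 6 is sketched below; the contract getter name and the layout of the returned policy tuple are assumptions for illustration, as the paper does not detail the CapAC token fields.

```python
import time

def is_access_allowed(contract, token_id, user_address, requested_action):
    """Validate the user's capability against the AC token stored on the ledger (illustrative)."""
    policy = contract.functions.query_dataAC(token_id).call()
    authorized_user, allowed_actions, expiry = policy[:3]   # assumed tuple layout
    if user_address != authorized_user:
        return False                                        # not the delegated user
    return requested_action in allowed_actions and expiry > time.time()
```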
Data verification: To audit the data received from a private server, the user queries the Swarm hash from the Blockchain and then retrieves the corresponding meta-data from the DDS, as Step 8 shows. Because the meta-data contain the audit proof that was submitted by the data owner when the data were uploaded, the data user can verify whether the received data satisfy the recorded data properties and whether the authenticator and signature are consistent. In the data verification process, the user first checks whether the properties of the received data satisfy the configuration in the meta-data. Then, it locally calculates the audit proof over the received data and compares it with the proof recorded in the meta-data. If the audit proofs are equal, the data integrity is guaranteed. Otherwise, the data may be inconsistent with the original version or may have been corrupted during storage or sharing.
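A minimal sketch of this client-side audit check follows, reusing the illustrative meta-data layout from the earlier data storage sketch; the signature scheme and field names remain assumptions.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_received_data(raw_pc: bytes, meta_data: dict, owner_pk: Ed25519PublicKey) -> bool:
    """Check downloaded point cloud data against the owner's audit proof (illustrative)."""
    config, proof = meta_data["config"], meta_data["audit_proof"]
    if len(raw_pc) != config["size"]:                      # (1) recorded data properties
        return False
    local_auth = hashlib.sha256(raw_pc).hexdigest()        # (2) recompute the authenticator
    if local_auth != proof["authenticator"]:
        return False
    try:                                                   # (3) signature of the data owner
        owner_pk.verify(bytes.fromhex(proof["signature"]), bytes.fromhex(local_auth))
    except InvalidSignature:
        return False
    return True
```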
4.2. Structure of the Hybrid On-Chain and Off-Chain Storage
In general, constructing a 3D model requires multiple segmented point clouds, and each point cloud segment
may have a large data size and demand privacy preservation. Thus, it is impractical to directly store point clouds in a transparent Blockchain for data authentication. To ensure efficient and privacy-preserving data storage and sharing, we adopted a hybrid on-chain and off-chain storage structure in the data authentication framework, as shown in
Figure 2b. In the point cloud data collection stage, the meta-data of the point cloud segments are saved in the DDS, while the raw data are managed by private servers. The meta-data
contain the data configuration (e.g., server address and properties), which is relatively small regardless of the size of the original data. In addition, an audit proof consists of the integrity authenticator of a point cloud segment and a signature signed by the data owner, both of which are short byte strings. Therefore, the small size of the meta-data can greatly reduce the communication cost of the verification process. Furthermore, the meta-data are encrypted and then saved on the DDS, and only authorized users are allowed to query and decrypt them. This helps protect the privacy of data owners without exposing sensitive information on the Blockchain or DDS.
In our Swarm-based DDS, each stored meta-data item has a unique Swarm hash as the addressable reference to the actual data storage, and any change to the stored data will lead to an inconsistent Swarm hash. Therefore, recording the Swarm hash on an immutable distributed ledger provides non-tamperability for the meta-data on the DDS. To verify the data integrity of a large point cloud file, the Swarm hash of each meta-data item is treated as a digest, which becomes a leaf of a Merkle tree. Then, we use this ordered list of digests to construct a binary Merkle tree (BMT), where the number of leaves equals the number of meta-data items. Modifying any digest or changing the sequential order will lead to a different root hash of the Merkle tree. Therefore, the root hash is also stored on the distributed ledger as the data integrity proof of the entire file. In the data verification process, a data user can query the digests from the Blockchain and then validate the integrity of the segment data in parallel. It can then easily reconstruct the Merkle tree of the digests and obtain a local root hash. Finally, the data integrity of the entire point cloud file can be efficiently verified by comparing the locally computed root hash with the one recorded on the distributed ledger.
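The construction can be sketched as follows: a minimal binary Merkle tree over the digest list, assuming SHA-256 for the internal nodes and duplication of the last node on odd-sized levels, details the paper does not specify.

```python
import hashlib

def build_merkle_root(digests):
    """Reduce an ordered list of Swarm hashes (bytes) to the binary Merkle tree root."""
    if not digests:
        raise ValueError("at least one digest is required")
    level = list(digests)
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels (assumption)
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Any change to a digest, or to the order of digests, changes the root:
# root = build_merkle_root([swarm_hash_1, swarm_hash_2, ...])
```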
4.3. Decentralized Data Authentication Procedures
The Blockchain-based data access authorization and integrity verification procedures are presented as pseudo-code in Algorithm 1. Given a list of meta-data M, the data owner traverses each meta-data item and uploads it to the DDS, then appends the returned Swarm hash to an ordered list, as Lines 2–6 show. Following that, the data owner feeds this list to the function BMT(), which constructs a binary Merkle tree and outputs the root hash (Line 7). Finally, the data owner calls the smart contract function set_dataAC() to record the list of Swarm hashes and the root hash on the distributed ledger as the public audit proof, which can be uniquely addressed by a token_id (Line 8).
In the data verification procedure, the data user first uses the token_id as the input to call the smart contract function query_dataAC(), which returns the public audit proof information stored on the Blockchain (Line 10). Regarding token validation, the data user performs the function BMT() on the received list of Swarm hashes to recover the root hash and then checks whether it is consistent with the root hash recorded in the audit proof. If the validation fails, the procedure directly returns a false result. Otherwise, it proceeds to the meta-data verification. Given the received list of Swarm hashes, the data user traverses each digest, which is used to download the corresponding meta-data item from the DDS. Any wrong digest or corrupted meta-data will lead to a false result returned by the function download_data(). Finally, a valid list of the meta-data is returned only if all meta-data can be successfully retrieved, as Lines 16–23 show.
Algorithm 1 The data access authorization and integrity verification procedures
1: procedure authorize_data(M)
2:   H ← [ ]
3:   for each m in M do
4:     h ← upload_data(m)
5:     H.append(h)
6:   end for
7:   root ← BMT(H)
8:   Contract.set_dataAC(token_id, H, root)
9: procedure verify_data(token_id)
10:   H, root ← Contract.query_dataAC(token_id)
11:   root′ ← BMT(H)
12:   if root′ ≠ root then
13:     return false
14:   end if
15:   M′ ← [ ]
16:   for each h in H do
17:     m ← download_data(h)
18:     if m = none then
19:       return false, M′
20:     end if
21:     M′.append(m)
22:   end for
23:   return true, M′
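For readers who prefer running code, the two procedures can be rendered in Python as below, reusing build_merkle_root() from the sketch in Section 4.2. The DDS upload/download helpers and the web3-style contract object are passed in as placeholders, since the paper does not fix their exact interfaces; treat the argument names as assumptions.

```python
def authorize_data(meta_list, upload_data, contract, token_id, owner_account):
    swarm_hashes = []
    for meta in meta_list:                       # Lines 2-6: upload each meta-data item
        swarm_hashes.append(upload_data(meta))   # collect the returned Swarm hashes
    root = build_merkle_root(swarm_hashes)       # Line 7: binary Merkle tree root
    contract.functions.set_dataAC(token_id, swarm_hashes, root).transact(
        {"from": owner_account})                 # Line 8: record the public audit proof

def verify_data(token_id, download_data, contract):
    swarm_hashes, root = contract.functions.query_dataAC(token_id).call()   # Line 10
    if build_merkle_root(swarm_hashes) != root:  # Lines 11-14: token validation
        return False, []
    meta_list = []
    for digest in swarm_hashes:                  # Lines 16-22: fetch and check meta-data
        meta = download_data(digest)
        if meta is None:
            return False, meta_list
        meta_list.append(meta)
    return True, meta_list                       # Line 23
```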
5. Experimental Results and Evaluation
In this section, the experimental configuration based on a proof-of-concept prototype implementation is described. Following that, we evaluate the performance of SAUSA based on the numerical results, with a particular focus on the impact of the Blockchain on system performance. In addition, a comparative evaluation against previous works highlights the main contributions of SAUSA in terms of the lightweight Blockchain design, performance improvement, and security and privacy properties. Moreover, we analyze the security properties and discuss potential attacks.
5.1. Prototype Implementation
We used the Python language to implement a proof-of-concept prototype including client and server applications and microservices. A micro-framework called Flask [
28] was used to develop RESTful APIs for the applications and microservices. We used the Python library cryptography [
29] to develop all security primitives, such as the digital signature, symmetric cryptography (Fernet), and hash function (SHA-256). Solidity [
30] was used for smart contracts’ implementation and testing, and all SCs were deployed on a private Ethereum test network.
The experimental infrastructure worked under a physical local area network (LAN) environment and included a cloud server and several desktops and Raspberry Pi (Rpi) boards.
Figure 3 shows the experimental setup for our prototype validation. A desktop emulated the private server, which stored the point cloud data managed by the data owner. To evaluate the impact of the hardware platform on the data user side, both the Rpis and desktops were used to simulate user clients that request data access. The private Ethereum network consisted of six miners, which were deployed on the cloud server as six separate containers, and each containerized miner was assigned one CPU core, while the other microservice containers, which were deployed on the desktops and Rpis, worked in light-node mode without mining blocks. All participants used Go-Ethereum [
31] as the client application to interact with the smart contracts on the private Ethereum network. Regarding the Swarm-based DDS, we built a private Swarm test network consisting of five desktops as the service sites.
Table 1 describes the devices that were used to build the experimental testbed.
5.2. Performance Evaluation
This section evaluates the performance of executing the operations in the data authorization and verification processes. In the data authorization process, the desktop launches a transaction, which encapsulates the Swarm hash of the meta-data, to the Blockchain; the state of the SC is then updated once a block containing the transaction is committed by the miners. Thus, we evaluated the end-to-end latency and gas usage of a successful data authorization operation. According to Algorithm 1, the whole data integrity verification procedure is divided into three steps: (1) the client (Rpi or desktop) queries the data token containing the Swarm hashes of the meta-data and the Merkle root from the Blockchain; (2) the client validates the Merkle root and Swarm hashes in the data token; (3) the client retrieves the meta-data from the DDS and verifies them. Therefore, we evaluated the processing time of the individual steps on different platforms by changing the number of meta-data entries. Finally, we analyzed the computational overhead incurred by retrieving the meta-data from the DDS and performing symmetric encryption on the meta-data. We conducted 50 Monte Carlo test runs for each test scenario and used the averages as the results.
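The latency measurements below can be reproduced with a simple harness along these lines; the operation argument is a placeholder for any of the steps above.

```python
import statistics
import time

def average_latency(operation, runs=50):
    """Average wall-clock latency over repeated runs, mirroring our Monte Carlo setup."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()                       # e.g., a token query or meta-data download
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)
```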
5.2.1. End-to-End Latency and Gas Usage by Data Authorization
We scaled up the number of meta-data entries in the data authorization scenarios to evaluate how the size of the ordered list of digests (Swarm hashes) impacts the performance. As a transaction's commit time is greatly influenced by the Blockchain confirmation time, we observed that all data authorization operations with different numbers of meta-data entries demonstrated almost the same end-to-end latency (about 4 s) in our private Ethereum network. Depending on the computational complexity and the amount of data processed by the SC, the gas used by the transactions may vary.
Figure 4 shows the gas usage of data authorization transactions as the number of meta-data entries increases. The longer the ordered list of digests, the more gas is used per transaction that stores the data on the Blockchain. Hence, recording the Swarm hashes, rather than the meta-data or even the raw data, on the distributed ledger can greatly reduce the gas consumption of the Blockchain transaction.
5.2.2. Processing Time by Data Verification
Figure 5 shows the average delay for a client to successfully execute the data token query function of the SC as the number of meta-data entries increases from 5 to 200. For larger numbers of entries, the token query procedure of the SC needs more computational resources to process the data on the distributed ledger. Thus, the delay of querying a data token scales linearly on both platforms with the same gain. Due to their different computational resources, the processing time of the data token query on the Rpis is almost double that on the desktops.
Figure 6 shows the computational overheads of validating the token data on the client side as the number of meta-data entries changes. The data token validation requires reconstructing the binary Merkle tree of the ordered list of Swarm hashes, which requires a traversal whose cost grows linearly with the number of digests. Then, the root hash can be used as the fingerprint for all the meta-data to check for inconsistencies, which requires only a single comparison. Thus, the computational overheads incurred by verifying the token data scale linearly with the number of meta-data entries. Computing the root hash of the binary Merkle tree demands intensive hash operations, so the computational power of the client machine dominates the performance of the data token validation. Therefore, a larger number of meta-data entries in the data token validation brings more delay on the Rpis than on the desktops. However, the impact was almost marginal in our test scenarios, where the number of meta-data entries did not exceed 200.
Figure 7 shows the processing time of verifying the meta-data on the client side as the number of meta-data entries increases. In the meta-data verification stage, a client uses the Swarm hash list in the data token to sequentially retrieve the meta-data from the DDS, so the communication cost grows linearly with the number of entries. Given the fixed bandwidth of the test network, increasing the number of meta-data entries results in a larger accumulated round-trip time (RTT) and more computation for meta-data transmission. As a result, the delay of verifying a batch of meta-data scales linearly with the number of entries. Unlike the desktops, the Rpis have limited computational resources to handle each data transmission. Therefore, the Rpis take longer to verify the same amount of meta-data than the desktops do.
5.2.3. Computational Cost by Preserving Meta-Data Privacy
In our test scenario, the average size of the meta-data file was about 2 KB.
Figure 8 shows the processing time of accessing data from (to) the DDS and executing encryption over a meta-data file on the client side. The delays incurred by uploading a meta-data file to the Swarm network and then downloading it from a service site are almost the same on the desktops and Rpis. However, the Rpis took longer to encrypt and decrypt the data than the desktops did due to their limited computational and memory resources. Compared to the Swarm operations, performing encryption algorithms on the meta-data brings extra overhead to the data verification process on both platforms. As a trade-off, using encrypted meta-data to preserve privacy inevitably comes at the cost of longer service latency.
5.3. Comparative Evaluation
Table 2 presents the comparison between our SAUSA and previous Blockchain-based solutions for big data applications. The symbol
√ indicates that a scheme guarantees the corresponding security property or implements a prototype to evaluate system performance or other specifications. The symbol × indicates the opposite. Unlike existing solutions, which lack details on an optimal network framework for QoS or evaluations of the impact of applying Blockchain to big data applications, we present a comprehensive system architecture, along with details of the SDN-based service network and the lightweight data authentication framework. In particular, we evaluated the performance (e.g., network latency, processing time, and computational overheads) of the Blockchain-enabled security mechanism in the data access authentication and integrity verification process.
Regarding storage optimization and privacy preservation for point cloud data sharing, the hybrid on-chain and off-chain data storage structure not only reduces the communication and storage overheads by avoiding directly saving large volumes of raw data or audit proofs in Blockchain transactions, but also protects sensitive information by exposing only the references of encrypted meta-data on the transparent distributed ledger as the fingerprint proof. Unlike existing solutions, which rely on centralized off-chain storage (e.g., a centralized fog server or storage server) to store audit proofs, using a decentralized Swarm network as the off-chain storage enhances the robustness (availability and recoverability) of point cloud data sharing in multi-domain applications.
5.4. Security and Privacy Analysis
In this section, we first discuss the security and robustness of SAUSA and evaluate the impact of several common attacks on the proposed scheme. Then, we briefly describe the privacy preservation of SAUSA. Regarding the adversary model, we assumed that the capability of attackers is bounded by probabilistic polynomial time (PPT) such that they cannot compromise the basic cryptographic primitives, such as finding hash function collisions or breaking the cipher-text without knowing the secret keys. Moreover, we assumed that an adversary cannot control the majority of miners within the Ethereum network.
5.4.1. Sybil Attack
In a Sybil attack, an adversary can forge multiple fake identities to create malicious nodes. As a result, these malicious nodes can control the DDS network or even the consensus network to some extent. However, in the proposed SAUSA, permissioned network management provides the basic security primitives, such as the PKI and KDC for identity authentication and message encryption. Thus, all nodes with invalid identities are prevented from joining the domain networks. Furthermore, properly defined AC strategies can further reduce the impact of Sybil attacks across different application domains.
5.4.2. Collusion Tamper Attack
An adversary can compromise multiple nodes that collude to tamper with the PC data and thus influence the accuracy of 3D object detection and tracking. A collusion tamper attack can be mounted easily, especially in a small network. Our SAUSA anchors the meta-data of the original PC data to the Ethereum Blockchain. Once the transactions encapsulating the meta-data are finalized on the immutable public distributed ledger, it is difficult for an adversary to revert the transactions or the state of the smart contracts, as doing so would require controlling the majority (51%) of the nodes within a public Ethereum network. As the meta-data recorded on the Blockchain can be used as audit proofs for verifying the integrity of the data on local private servers, the possibility of collusion tampering is reduced.
5.4.3. DDoS Attack
In conventional cloud-based systems, an adversary can access multiple compromised computers, such as bots, to send huge volumes of TCP request traffic to target cloud servers in a short period of time. As a result, the unexpected traffic surge caused by the DDoS attack overwhelms the centralized servers such that service and networking functions become unavailable. Our solution adopts a DDS to achieve efficient and robust meta-data storage and distribution. As the DDS uses a DHT-based protocol to coordinate and maintain meta-data access service sites over a P2P network, it is hard for an adversary to disrupt the meta-data service by launching DDoS attacks on target service sites. Moreover, our data authentication framework relies on SCs deployed on Ethereum to ensure decentralization. Therefore, our approach can mitigate the impact of DDoS attacks better than centralized data auditing methods.
5.4.4. Privacy Preservation of PC Data
In data acquisition, users rely on trusted private servers to protect the raw PC data through AC policies and encryption algorithms. In the data sharing process, only encrypted meta-data along with their references are exposed to the public network. The decentralized data authentication framework prevents attackers from violating access privileges or inspecting any sensitive information. However, the prototype of SAUSA presented in this paper has no integrated privacy protection module to deter data privacy breaches by honest-but-curious participants, such as dishonest data users or private servers that attempt to obtain private information from PC data without deviating from the pre-defined security protocols. Therefore, a data-privacy-preserving component based on differential privacy or secure multi-party computation is needed to guarantee PC data privacy, and we leave this for future work.