Article

Efficient and Privacy-Preserving Decision Tree Inference via Homomorphic Matrix Multiplication and Leaf Node Pruning

1 Graduate School of Engineering, Kobe University, Kobe 657-8501, Japan
2 Cybersecurity Research Institute, National Institute of Information and Communications Technology, Tokyo 184-8795, Japan
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(10), 5560; https://doi.org/10.3390/app15105560
Submission received: 10 April 2025 / Revised: 7 May 2025 / Accepted: 13 May 2025 / Published: 15 May 2025
(This article belongs to the Special Issue Intelligent Systems and Information Security)

Abstract:
Cloud computing is widely used by organizations and individuals to outsource computation and data storage. With the growing adoption of machine learning as a service (MLaaS), machine learning models are being increasingly deployed on cloud platforms. However, operating MLaaS on the cloud raises significant privacy concerns, particularly regarding the leakage of sensitive personal data and proprietary machine learning models. This paper proposes a privacy-preserving decision tree (PPDT) framework that enables secure predictions on sensitive inputs through homomorphic matrix multiplication within a three-party setting involving a data holder, a model holder, and an outsourced server. Additionally, we introduce a leaf node pruning (LNP) algorithm designed to identify and retain the most informative leaf nodes during prediction with a decision tree. Experimental results show that our approach reduces prediction computation time by approximately 85% compared to conventional protocols, without compromising prediction accuracy. Furthermore, the LNP algorithm alone achieves up to a 50% reduction in computation time compared to approaches that do not employ pruning.

1. Introduction

It has been a long time since data was referred to as “the oil of the 21st century”. Today, many of the world’s most valuable companies thrive by collecting and monetizing vast amounts of data. However, such data often include sensitive personal information, making it inaccessible for public or collaborative use—not only due to commercial interests but also to mitigate privacy risks. Despite the significant potential of data sharing in addressing pressing societal challenges such as crime prevention, public health, and elder care, privacy concerns continue to severely limit the practical use of sensitive datasets across institutional boundaries. To address this challenge, privacy-preserving machine learning (PPML) has emerged as a key paradigm for enabling data-driven insights without compromising confidentiality.
One of the most prominent PPML frameworks is federated learning (FL) [1], which allows multiple parties to collaboratively train machine learning models without sharing their raw data. Each participant trains a local model and shares only model updates with a central server, preserving data privacy. However, FL assumes that participants have sufficient computational capacity for local training. In many real-world settings, particularly for devices or institutions with limited resources, local computation must be outsourced, raising new privacy concerns, even under semi-honest threat models.
This paper focuses on another type of PPML scenario in which a local entity securely outsources prediction computations. In our setting, both the data and the model remain encrypted, ensuring the confidentiality of sensitive user input and the intellectual property of the model provider. This setup is especially effective in protecting against model inversion attacks [2], which constitute a well-known vulnerability in machine learning-as-a-service (MLaaS) platforms. To implement secure outsourced prediction, we employ homomorphic encryption (HE) [3], particularly its somewhat homomorphic variant, somewhat homomorphic encryption (SHE) [4], which supports a limited number of encrypted additions and multiplications with significantly improved efficiency over fully homomorphic schemes. This enables practical encrypted inference in real-world scenarios. Among the various machine learning models, decision trees are particularly attractive for privacy-preserving inference due to their interpretability and low computational cost. As a result, a growing body of research has focused on designing cryptographic protocols for privacy-preserving decision tree (PPDT) inference that minimize both communication and computational overhead while preserving privacy.

1.1. Related Work

Existing PPDT protocols can be broadly classified based on the cryptographic primitives they employ.
HE-based protocols: 
HE supports computation over encrypted data without requiring decryption. Akavia et al. [5] used low-degree polynomial approximations to support non-interactive inference with communication costs independent of tree depth. Frery et al. [6], Hao et al. [7], and Shin et al. [8] adopted the TFHE, BFV, and CKKS schemes, respectively, for efficient computation with low multiplicative depth. Cong et al. [9] reported ciphertext size comparisons and proposed homomorphic traversal algorithms across various commonly used HE schemes. Most existing HE-based PPDT protocols are designed for two-party settings, similar to the linear-function-based scheme proposed by Tai et al. [10].
Protocols based on other cryptographic primitives: 
Zheng et al. [11] employed additive secret sharing in a two-server setting, ensuring that no single party gained access to both the model and the data, while maintaining low communication overhead. MPC-based approaches, such as those reported by Wu et al. [12], combine additive HE with oblivious transfer (OT), offering strong privacy guarantees but often suffering from scalability issues, particularly with deep trees. Differential privacy (DP), although more commonly used during training [13], can complement cryptographic inference methods by adding noise to outputs to mask sensitive patterns. However, DP typically compromises utility and operates orthogonally to encrypted inference techniques.
Within this landscape, several notable HE-based PPDT approaches stand out. Tai et al. [10] proposed a prediction protocol using linear functions and DGK-based integer comparison [14], reducing complexity at the cost of requiring bitwise encryption. Lu et al. [15] enhanced efficiency via the XCMP protocol, based on Ring-LWE SHE, although it struggled with large input bit lengths and unstable ciphertexts. Saha and Koshiba [16] addressed this with SK17, encoding integer bits as polynomial coefficients to support larger comparisons. Wang et al. [17] further improved SK17 by introducing a faster comparison protocol and a non-interactive variant with reduced ciphertext functionality.
Despite these advancements, existing protocols still face challenges in simultaneously achieving high efficiency, scalability, low communication overhead, and full tree structure confidentiality—especially under realistic semi-honest adversary models.

1.2. Our Contributions

To overcome the above-mentioned limitations, we propose a novel three-party PPDT inference protocol that achieves secure, structure-hiding, and communication-efficient decision tree predictions over encrypted data and models. The key innovations of our work are as follows:
  • Homomorphic matrix multiplication-based inference: Departing from polynomial approximation and linear path evaluation methods [10], we introduce homomorphic matrix multiplication as the primary operation for encrypted path computation. This novel application supports structured and scalable evaluation of encrypted inputs.
  • Leaf node pruning during inference: We propose leaf node pruning at inference time, a novel runtime optimization that reduces the number of nodes involved in computation, significantly improving performance. Unlike traditional model pruning during training, this technique operates during encrypted inference.
  • Structure-hiding inference protocol: The decision tree structure—including internal nodes and branching conditions—is fully hidden from both the client and the server. By ensuring that all path computations are homomorphically encrypted, the protocol mitigates leakage risks present in prior PPDT methods.
  • Semi-interactive three-party architecture: Our protocol requires only one round of interactive communication between the data holder (client) and the outsourced server. No interaction is required from the model holder during inference. This design enables low-latency, real-world deployment scenarios.
To achieve these properties, we adopt the efficient integer comparison protocol by Wang et al. [17] and the homomorphic matrix multiplication techniques from [18,19] for encrypted path evaluation. By integrating homomorphic matrix multiplication, inference-time leaf node pruning, and tree structure hiding into a semi-interactive three-party framework, our method enables efficient, secure, and practical decision tree inference in untrusted environments. We evaluated the proposed PPDT protocol on standard UCI datasets [20] to demonstrate its performance and practicality.
The remainder of this paper is organized as follows: Section 2 reviews the necessary preliminaries, the algorithm design is detailed in Section 3, the experimental results are presented in Section 4, and the conclusions are provided in Section 5.

2. Preliminaries

In this section, we summarize the approaches on which our proposal builds. In Section 2.1, we first review the concept of decision trees and the decision tree classification via linear functions introduced by Tai et al. [10], which involves two kinds of secure computation: comparison at each node of the tree and linear-function evaluation for prediction. We focus on improving the efficiency of these two operations:
  • Comparison: We use Wang et al.’s protocol [17] instead of the DGK approach [14] to improve the efficiency of secure comparison in each node.
  • Prediction: We use secure inner product/matrix multiplication in place of the linear function, allowing prediction to be outsourced while keeping the tree model secret.
Therefore, we recall the secure comparison scheme proposed by Wang et al. [17] in Section 2.2 and the secure matrix multiplication introduced by Duong et al. [18] in Section 2.3. Both schemes are constructed on ring-LWE-based homomorphic encryption (see Appendix A for details). The notations used in this paper are listed in the Notations section.

2.1. Existing Decision Tree Classification

2.1.1. Decision Tree

Figure 1 presents an example of a decision tree. The decision tree $\mathcal{T}: \mathbb{Z}^N \to \mathbb{Z}$ is a function that takes as input the feature vector $X = [X_0, \dots, X_i, \dots, X_{N-1}]$ and outputs the class $\mathcal{T}(X) \in \{c_j\}$ to which $X$ belongs. Generally, the feature vector space is $\mathbb{R}^N$; however, in this paper, we denote it as $\mathbb{Z}^N$ because the input is encrypted attribute data. The decision tree in this paper is a binary tree and consists of two types of nodes: decision nodes and leaf nodes. The decision node $D_j$ outputs a Boolean value $b_j = \mathbb{1}\{X_{\lambda_j} > t_j\}$, where $\lambda_j \in [N]$ is an index into the feature vector and $t_j$ is a threshold. A leaf node $L_k$ holds the output value $\mathrm{score}(L_k) \in C = \{C_0, \dots, C_{K-1}\}$. In a binary tree, there are $m$ decision nodes and $m+1$ leaf nodes. For example, Figure 1 shows the case of $K = 3$, $m = 3$, $\mathrm{score}(L_1) = C_0$, $\mathrm{score}(L_2) = C_1$, $\mathrm{score}(L_3) = C_0$, $\mathrm{score}(L_4) = C_2$.

2.1.2. Decision Tree Classification via Linear Function [10]

The Boolean output of the decision node $D_j$, $b_j = 1$ (resp. $0$), indicates that the next node is the left (resp. right) child. For edges $e_{j,0}$ and $e_{j,1}$, respectively connecting the decision node $D_j$ to its left and right child, we define the corresponding edge costs $\mathrm{ec}$ as follows:

$$\mathrm{ec}_{j,0} := 1 - b_j, \qquad \mathrm{ec}_{j,1} := b_j.$$

From the root node of the decision tree to each leaf node, exactly one path is determined. We define the path cost $\mathrm{pc}_k$ as the sum of the edge costs along $\mathrm{Path}_k$, the path from the root node to the leaf node $L_k$. The edge costs $\mathrm{ec}_{j,0}$ and $\mathrm{ec}_{j,1}$ are determined by $b_j$, the comparison result at decision node $D_j$; the path cost for each leaf node is therefore a linear function of the $b_j$. For example, the following $\mathrm{pc}_1$, $\mathrm{pc}_2$, $\mathrm{pc}_3$, and $\mathrm{pc}_4$ denote the path costs for leaf nodes $L_1$, $L_2$, $L_3$, and $L_4$ in Figure 1, respectively:

$$\begin{aligned}
\mathrm{pc}_1 &= \mathrm{ec}_{1,0} = 1 - b_1,\\
\mathrm{pc}_2 &= \mathrm{ec}_{1,1} + \mathrm{ec}_{2,0} + \mathrm{ec}_{3,0} = b_1 + (1-b_2) + (1-b_3) = 2 + b_1 - b_2 - b_3,\\
\mathrm{pc}_3 &= \mathrm{ec}_{1,1} + \mathrm{ec}_{2,0} + \mathrm{ec}_{3,1} = b_1 + (1-b_2) + b_3 = 1 + b_1 - b_2 + b_3,\\
\mathrm{pc}_4 &= \mathrm{ec}_{1,1} + \mathrm{ec}_{2,1} = b_1 + b_2.
\end{aligned}$$

The comparison result $b_j$ makes the edge cost to the next node 0 and the edge cost to the other child 1. As a result, only the path leading to one specific leaf node $L_k$ has total cost $\mathrm{pc}_k = 0$; all other paths have non-zero costs. The classification result is returned if and only if $\mathrm{pc}_k = 0$, meaning the output corresponds to the label stored in the leaf node $L_k$.
This can be illustrated with a concrete example (see Bob in Figure 1): Bob's attributes are height $< 170$, weight $> 60$, and age $> 25$, which yield comparison results $b_1 = 0$, $b_2 = 1$, and $b_3 = 1$, respectively. Substituting these into Equation (2), we compute the path costs as $\mathrm{pc}_1 = \mathrm{pc}_3 = \mathrm{pc}_4 = 1$, and only $\mathrm{pc}_2 = 0$. Therefore, the output for Bob is $C_1$, the label held by leaf node $L_2$.
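The example above can be reproduced with a minimal plaintext Python sketch. The tree shape, the thresholds (170, 60, 25), and the leaf labels are inferred from Figure 1 and the Bob example; they are illustrative only.

```python
# Plaintext sketch of classification via path costs (no encryption involved).
SCORES = ["C0", "C1", "C0", "C2"]          # score(L1) .. score(L4)

def comparison_bits(height, weight, age):
    # b_j = 1{X_{lambda_j} > t_j}; b_j = 1 steers the path to the left child
    return int(height > 170), int(weight > 60), int(age > 25)

def path_costs(b1, b2, b3):
    # The linear functions of Equation (2); exactly one pc_k is 0
    return [1 - b1,
            2 + b1 - b2 - b3,
            1 + b1 - b2 + b3,
            b1 + b2]

def classify(height, weight, age):
    pcs = path_costs(*comparison_bits(height, weight, age))
    return SCORES[pcs.index(0)]            # label of the unique zero-cost path

print(classify(165, 70, 30))   # Bob: b = (0, 1, 1) -> pc = [1, 0, 1, 1] -> C1
```

Running the sketch on Bob's attributes yields the path costs $[1, 0, 1, 1]$ and hence the label $C_1$, matching the hand calculation above.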

2.2. Secure Comparison Protocol

In this subsection, we describe the protocol proposed by Wang et al. [17], which securely computes the comparisons required at the decision nodes of a decision tree.

2.2.1. μ -bit Integer Comparison [14]

Assume that Alice and Bob respectively hold two $\mu$-bit integers, $a$ and $b$, and consider how to compare the two integers without revealing the values of $a$ and $b$ to each other. Define the binary vectors of $a$ and $b$ (most significant bit first) as $\mathbf{a}^b = [a_0, \dots, a_{\mu-1}]$ and $\mathbf{b}^b = [b_0, \dots, b_{\mu-1}]$. In addition, define the following two binary vectors $\mathbf{a}_i^b$ and $\mathbf{b}_i^b$ ($1 \le i \le \mu - 1$), whose first $i$ bits agree with those of $\mathbf{a}^b$ and $\mathbf{b}^b$ while the remaining upper bits are set to zero:

$$\mathbf{a}_i^b = [a_0, \dots, a_{i-1}, 0, \dots, 0], \qquad \mathbf{b}_i^b = [b_0, \dots, b_{i-1}, 0, \dots, 0].$$

To compare the two integers $a$ and $b$, define

$$d_i = w_i + v_i,$$

where

$$w_i = \langle \mathbf{a}_i^b - \mathbf{b}_i^b,\ \mathbf{a}_i^b - \mathbf{b}_i^b \rangle \ge 0, \qquad v_i = a_i - b_i + 1.$$

Here, $w_i$ is an inner product counting how many bits differ between $\mathbf{a}_i^b$ and $\mathbf{b}_i^b$. Therefore, $w_i = 0$ implies that the first $i$ bits of $a$ and $b$ are the same (i.e., $[a_0, \dots, a_{i-1}] = [b_0, \dots, b_{i-1}]$). Next, we look at the $(i+1)$-th bit of $a$ and $b$ when $w_i = 0$ is satisfied. Here, $v_i = a_i - b_i + 1$ can take three values: 0, 1, or 2. If $(a_i, b_i) = (0, 1)$ (i.e., $a < b$), then $v_i = 0$; otherwise, $v_i = 1$ or $2$. That is, if $v_i = 0$ under $w_i = 0$, then $a < b$; otherwise, $a \ge b$. Therefore, to decide whether $a < b$, we merely need to check whether $d_i$ in Equation (4) equals 0 for any position $i \in \{0, 1, \dots, \mu - 1\}$.
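The comparison logic can be checked with a short plaintext Python sketch, assuming (as the prefix-matching role of $w_i$ implies) that bit vectors are ordered most-significant-bit first:

```python
# Plaintext sketch of the mu-bit comparison: a < b iff d_i = w_i + v_i = 0
# for some i. Bits are indexed MSB-first (a_0 is the most significant bit).

def bits_msb_first(u, mu):
    return [(u >> (mu - 1 - j)) & 1 for j in range(mu)]

def is_less(a, b, mu):
    ab, bb = bits_msb_first(a, mu), bits_msb_first(b, mu)
    for i in range(mu):
        w_i = sum((ab[j] - bb[j]) ** 2 for j in range(i))  # differing prefix bits
        v_i = ab[i] - bb[i] + 1                            # 0 only when (a_i, b_i) = (0, 1)
        if w_i + v_i == 0:
            return True   # some d_i = 0  =>  a < b
    return False          # no zero d_i   =>  a >= b

# Exhaustive check against the plain "<" for all 4-bit integer pairs
assert all(is_less(a, b, 4) == (a < b) for a in range(16) for b in range(16))
```

Since $w_i \ge 0$ and $v_i \in \{0, 1, 2\}$, the sum $d_i$ vanishes only when both terms do, which is exactly the "equal prefix, then $a_i = 0, b_i = 1$" condition described above.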

2.2.2. Packing Method

For a $\mu$-bit integer $u$ whose binary vector is denoted as $\mathbf{u}^b = [u_0, \dots, u_{\mu-1}]$, the following packing polynomials are defined:

$$\mathrm{poly}_1(\mathbf{u}^b) = \sum_{i=0}^{\mu-1} u_i x^i, \qquad \mathrm{poly}_2(\mathbf{u}^b) = \sum_{d=1}^{\mu-1} \sum_{j=0}^{d-1} u_j x^{\mu d - j}.$$

Using this packing method, we have [17]

$$\mathrm{poly}(d) = \big(\mathrm{poly}_1(\mathbf{a}^b) - \mathrm{poly}_1(\mathbf{b}^b)\big)\big(\mathrm{poly}_2(\mathbf{a}^b) - \mathrm{poly}_2(\mathbf{b}^b) + \mathrm{poly}_3(\mathbf{1})\big) + \widetilde{\mathrm{poly}}(\mathbf{1}),$$

where $\mathbf{1} = (1, 1, \dots, 1)$ denotes the binary vector of the $\mu$-bit integer $2^{\mu} - 1$, and $\mathrm{poly}_3(\mathbf{1}) = \sum_{i=1}^{\mu} x^{(\mu-1)(i-1)}$ and $\widetilde{\mathrm{poly}}(\mathbf{1}) = \mathrm{poly}_1(\mathbf{1}) \cdot \mathrm{poly}_3(\mathbf{1})$ can be computed offline in advance. The coefficient of $x^{i\mu}$ ($i = 0, \dots, \mu-1$) in $\mathrm{poly}(d)$ is $d_i$.
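The identity can be verified over plain (unencrypted) polynomials. The sketch below assumes MSB-first bit vectors and writes `poly3_one` for the offline term $\mathrm{poly}_3(\mathbf{1})$; polynomials are coefficient lists.

```python
# Plaintext sketch verifying the packing identity of Equation (7): the
# coefficient of x^{i*mu} in poly(d) equals d_i = w_i + v_i.

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def bits(u, mu):                      # MSB-first binary vector of u
    return [(u >> (mu - 1 - j)) & 1 for j in range(mu)]

def poly2(ub, mu):                    # sum_{d=1}^{mu-1} sum_{j<d} u_j x^{mu*d - j}
    p = [0] * (mu * mu)
    for d in range(1, mu):
        for j in range(d):
            p[mu * d - j] += ub[j]
    return p

def poly3_one(mu):                    # sum_{i=1}^{mu} x^{(mu-1)(i-1)}
    p = [0] * (mu * mu)
    for i in range(mu):
        p[(mu - 1) * i] += 1
    return p

mu, a, b = 4, 5, 9
ab, bb = bits(a, mu), bits(b, mu)
diff1 = [x - y for x, y in zip(ab, bb)]               # poly_1(a^b) - poly_1(b^b)
diff2 = poly_add(poly2(ab, mu), [-c for c in poly2(bb, mu)])
poly_d = poly_add(poly_mul(diff1, poly_add(diff2, poly3_one(mu))),
                  poly_mul([1] * mu, poly3_one(mu)))  # + poly~(1)
d = [poly_d[i * mu] for i in range(mu)]               # d_i at degree i*mu
print(d, any(di == 0 for di in d))    # -> [0, 3, 3, 3] True, so a < b
```

For $a = 5$, $b = 9$ the direct definition gives $d = [0, 3, 3, 3]$ (prefixes agree at $i = 0$ and $(a_0, b_0) = (0, 1)$), and the extracted coefficients match.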

2.2.3. Secure Comparison Protocol

Wang et al. proposed three enhanced secure comparison protocols in [17]. Here, we recall the most efficient one, which uses the packing method defined by Equation (6). There are three participants in this protocol: Alice and Bob, who hold $\mu$-bit integers $a$ and $b$, respectively, compare $a$ and $b$ through a server without revealing their data. The server obtains the comparison result, which is the output of the protocol. The protocol proceeds as follows:
1. Alice generates a secret–public key pair $(sk, pk)$ and sends $pk$ to Bob and the server.
2. Alice and Bob compute
$$[\![a]\!]_i := \mathrm{Enc}(pk, \mathrm{poly}_i(\mathbf{a}^b)) \quad \text{and} \quad [\![b]\!]_i := \mathrm{Enc}(pk, \mathrm{poly}_i(\mathbf{b}^b))$$
for $i = 1, 2$, respectively, and send the results to the server.
3. The server computes
$$[\![d]\!] := ([\![a]\!]_1 \ominus [\![b]\!]_1) \otimes ([\![a]\!]_2 \ominus [\![b]\!]_2 \oplus \mathrm{poly}_3(\mathbf{1})) \oplus \widetilde{\mathrm{poly}}(\mathbf{1}).$$
4. The server masks $[\![d]\!]$ in encrypted form using a random polynomial $\gamma$ with coefficients in $\mathbb{Z}_p$,
$$[\![d']\!] := [\![d]\!] \oplus \gamma,$$
and sends $[\![d']\!]$ to Alice.
5. Alice decrypts $[\![d']\!]$, obtaining
$$d' = \mathrm{Dec}(sk, [\![d']\!]),$$
and sends $d'$ back to the server.
6. The server unmasks $d'$ as follows:
$$d = d' - \gamma = \sum_{i=0}^{\mu-1} d_i x^{i\mu} + (\text{terms of other degrees}),$$
and then verifies whether any $i\mu$-th coefficient ($i = 0, \dots, \mu-1$) is 0. If so, $a < b$; otherwise, $a \ge b$.
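Under the additive-masking reading of Steps 4–6 above, the round trip can be mocked in plain Python, with coefficient lists standing in for RLWE ciphertexts and exact integer arithmetic standing in for $\mathbb{Z}_p$; this is a toy illustration of the masking logic only, not of the cryptography.

```python
import random

# Toy mock of the masking round (Steps 4-6). gamma is the server's random
# additive mask polynomial; "ciphertexts" are plain coefficient lists here.

mu = 4
d_true = [random.randrange(100) for _ in range(mu * mu)]   # stand-in for poly(d)
gamma = [random.randrange(1, 1000) for _ in range(len(d_true))]

# Step 4 (server): mask in "encrypted" form and send d' to Alice.
d_masked = [c + g for c, g in zip(d_true, gamma)]
# Step 5 (Alice): decrypt d' and return it; Alice sees only masked coefficients.
# Step 6 (server): remove gamma and inspect the x^{i*mu} coefficients.
d_unmasked = [c - g for c, g in zip(d_masked, gamma)]

assert d_unmasked == d_true
a_less_b = any(d_unmasked[i * mu] == 0 for i in range(mu))
```

The mask hides every coefficient of $d$ from Alice, while the server, knowing $\gamma$, recovers $d$ exactly and reads off the comparison result.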

2.3. Ring-LWE-Based Secure Matrix Multiplication

Duong et al. proposed a secure matrix multiplication [18] using the packing method proposed by Yasuda et al. [19]. For an $\ell$-dimensional vector $U = [u_0, u_1, \dots, u_{\ell-1}]$, the following two polynomials are defined:

$$\mathrm{poly}_1(U) = \sum_{m=0}^{\ell-1} u_m x^m, \qquad \mathrm{poly}_t(U) = -\sum_{m=0}^{\ell-1} u_m x^{n-m}.$$

Note that for any two vectors $A = [a_0, a_1, \dots, a_{\ell-1}]$ and $B = [b_0, b_1, \dots, b_{\ell-1}]$, using the above packing method, we have

$$\mathrm{poly}_1(A) \times \mathrm{poly}_t(B) = \langle A, B \rangle + (\text{terms of non-zero degree}),$$

where the constant term of $\mathrm{poly}_1(A) \times \mathrm{poly}_t(B)$ provides the inner product $\langle A, B \rangle$.
Let $U$ be a $(k, \ell)$ matrix and let $U_1, \dots, U_k$ denote the row vectors of $U$. For matrix $U$, the packing method is defined as follows:

$$\mathrm{poly}_{mat}(U) = \mathrm{poly}_1(U_1) + \dots + \mathrm{poly}_1(U_k)\, x^{\ell(k-1)} = \sum_{i=1}^{k} \mathrm{poly}_1(U_i)\, x^{\ell(i-1)}.$$

Assume that $k\ell \le n$ ($n$: degree of the polynomial modulus $x^n + 1$). Letting the $(k, \ell)$ matrix be

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_k \end{bmatrix}$$

and letting $B$ denote an $\ell$-dimensional vector, we have

$$\mathrm{poly}_{mat}(A) \times \mathrm{poly}_t(B) = \sum_{i=1}^{k} \mathrm{poly}_1(A_i) \times \mathrm{poly}_t(B)\, x^{\ell(i-1)},$$

where the coefficient of $x^{\ell(i-1)}$ ($i = 1, \dots, k$) is $\langle A_i, B \rangle$, the inner product of vectors $A_i$ and $B$.
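The packing can be exercised over the plain ring $\mathbb{Z}[x]/(x^n+1)$, without encryption. The sketch below assumes the sign convention for $\mathrm{poly}_t$ under which the stated inner-product identity holds (the negation cancels against $x^n \equiv -1$).

```python
# Sketch of the Section 2.3 packing over Z[x]/(x^n + 1): the coefficient of
# x^{l(i-1)} in poly_mat(A) * poly_t(B) recovers the inner product <A_i, B>.

def ring_mul(p, q, n):
    """Multiply coefficient lists modulo x^n + 1 (x^n = -1)."""
    r = [0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            e = i + j
            if e >= n:
                r[e - n] -= pi * qj
            else:
                r[e] += pi * qj
    return r

def poly_mat(A, n):
    """poly_mat(A) = sum_i poly_1(A_i) x^{l(i-1)} for a (k, l) matrix A."""
    l = len(A[0])
    p = [0] * n
    for i, row in enumerate(A):
        for m, u in enumerate(row):
            p[l * i + m] += u
    return p

def poly_t(B, n):
    """poly_t(B) = -sum_m b_m x^{n-m}, reduced modulo x^n + 1."""
    p = [0] * n
    for m, u in enumerate(B):
        if m == 0:
            p[0] += u          # -u * x^n = +u
        else:
            p[n - m] -= u
    return p

A = [[1, 2, 3], [4, 5, 6]]     # (k, l) = (2, 3) with k*l <= n
B = [7, 8, 9]
n = 8
prod = ring_mul(poly_mat(A, n), poly_t(B, n), n)
print([prod[3 * i] for i in range(2)])   # -> [50, 122] = [<A_1,B>, <A_2,B>]
```

The coefficients at degrees $0$ and $\ell = 3$ give $\langle A_1, B \rangle = 50$ and $\langle A_2, B \rangle = 122$, with no cross-row contamination as long as $k\ell \le n$.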

3. PPDT Classification Model

In this section, we propose a PPDT classification model that mainly consists of the following two processing parts:
(1) Path cost calculation and (2) secure integer comparison.
The basic idea of the path cost calculation comes from Tai et al.'s decision tree classification protocol via linear functions [10], in which the decision tree model is treated as plaintext in a two-party computation setting (see Section 2.1). In contrast, we extend Tai et al.'s protocol so that not only the input data but also the decision tree model can be encrypted, hiding the actual contents from both the data provider and the model provider. To this end, we propose a secure three-party path cost calculation that extends the PPDT protocol of Tai et al. [10] with the integer comparison protocol of Wang et al. [17] described in Section 2.2.

3.1. Computation Model

To address the practical considerations of deploying our proposed algorithm in real-world cloud environments, we consider a scenario in which an organization that owns a decision tree classification model outsources the inference task by sending encrypted data to a cloud service provider (e.g., AWS and Google Cloud). Figure 2 illustrates the structure of our computational model, which involves three entities: the client, who possesses the feature vector to be classified; the model holder, who holds the trained decision tree; and the cloud server, which performs the encrypted computation on behalf of the model holder.
  • The client encrypts the feature vectors and sends them to the model holder, who then encrypts the information needed to calculate the decision tree’s path cost and threshold.
  • The client's encrypted feature vectors are relayed through the model holder to the server; this conceals the indices $\lambda_j$ indicating which elements of the feature vector are compared with the thresholds of the decision nodes.
  • On behalf of the model holder, the server performs the computation required for the decision tree prediction and sends the encrypted classification results to the client.
  • The client decrypts the information and obtains the classification results.
Even if the organization does not maintain its own servers, decision tree classification with privacy preservation is possible. The concrete process of the protocol is shown in Section 3.3.

A Representative Application of Online Medical Diagnostics

Machine-learned diagnostic models developed by medical institutions represent valuable intellectual property that is central to their competitive advantage. These institutions seek to offer diagnostic services without disclosing proprietary models, and cloud-based computation provides a practical solution to scale such services while reducing local computational costs.
However, concerns related to data confidentiality and model privacy present challenges for real-world deployment. Clients demand privacy-preserving services that do not expose their sensitive health data, while cloud service providers typically prefer not to manage or store sensitive information due to increased compliance and security risks.
Therefore, enabling secure and efficient inference over encrypted data and models aligns with the interests of all stakeholders. Our approach allows computations to be carried out on encrypted inputs and models, thereby mitigating privacy concerns and reducing the operational burdens associated with sensitive data management on cloud platforms.

3.2. Computation of Path Cost by Matrix × Vector Operation

In our method, we encrypt the path costs described in Section 2.1 and send them to the server, keeping the decision tree model secret while still allowing the server to calculate the path costs. Specifically, we rewrite the path cost of leaf node $L_k$ as $\mathrm{pc}_k = p_{k,0} + p_{k,1} b_1 + \dots + p_{k,m} b_m$ and introduce two vectors, $P_k$ and $B$, from which it can be computed:

$$\mathrm{pc}_k = \big\langle [p_{k,0}, p_{k,1}, \dots, p_{k,m}],\ [1, b_1, \dots, b_m] \big\rangle = \langle P_k, B \rangle.$$

Here, $B$ is the comparison result vector, and $P_k$ is a path vector. We define the path matrix $P$ of decision tree $\mathcal{T}$ as follows:

$$P = \begin{bmatrix} P_1 \\ \vdots \\ P_{m+1} \end{bmatrix}.$$

The path cost calculation of the decision tree can thus be replaced by the product $P \times B$. To obtain the correct multiplication result for a $(k, \ell)$ matrix and a vector of length $\ell$ using Equation (18), the constraint $k\ell \le n$ must be satisfied. The path matrix $P$ is an $(m+1, m+1)$ matrix, so $(m+1)^2 \le n$ would have to hold. When it does not, we divide the path matrix $P$ into several blocks of rows and compute each block separately. Specifically, since $P$ has $m+1$ columns, $m' = \lfloor n/(m+1) \rfloor$ rows can be processed per block, and the path matrix is divided into $S = \lceil (m+1)/m' \rceil$ matrices. When $S > (m+1)/m'$, we append the vector $I = [1, 0, \dots, 0]$ as many times as needed so that each partitioned matrix has exactly $m'$ rows, which also hides the size of the decision tree. With the above operations, we obtain $S$ $(m', m+1)$ path matrices as follows:

$$\mathbf{P}_1 = \begin{bmatrix} P_1 \\ \vdots \\ P_{m'} \end{bmatrix},\quad \mathbf{P}_2 = \begin{bmatrix} P_{m'+1} \\ \vdots \\ P_{2m'} \end{bmatrix},\quad \dots,\quad \mathbf{P}_S = \begin{bmatrix} P_{(S-1)m'+1} \\ \vdots \\ P_{m+1} \\ I \\ \vdots \\ I \end{bmatrix}.$$

Therefore, the secure matrix × vector operation of Section 2.3 allows us to securely compute the path costs from the encrypted path matrices.
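The partitioning step can be sketched as follows; the example path matrix is built from the coefficients of Equation (2), and `n = 8` is a toy polynomial degree chosen so that $m' = 2$ and $S = 2$.

```python
from math import ceil

# Sketch of the Section 3.2 partitioning: the (m+1, m+1) path matrix is split
# into S blocks of m' rows, padded with I = [1, 0, ..., 0] so that every
# block has exactly m' rows (which also hides the true tree size).

def partition_path_matrix(P, n):
    cols = len(P[0])                    # m + 1
    m_prime = n // cols                 # m' = floor(n / (m+1)) rows per block
    S = ceil(len(P) / m_prime)          # number of blocks
    I = [1] + [0] * (cols - 1)          # padding row
    padded = P + [I] * (S * m_prime - len(P))
    return [padded[s * m_prime:(s + 1) * m_prime] for s in range(S)]

P = [[1, -1, 0, 0],                     # path vectors from Equation (2)
     [2, 1, -1, -1],
     [1, 1, -1, 1],
     [0, 1, 1, 0]]
print([len(block) for block in partition_path_matrix(P, 8)])   # -> [2, 2]
```

With `n = 12` instead, $m' = 3$ and the last block is padded with two copies of $I$, so every block still has $m'$ rows.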

3.3. Proposed Protocol

The protocol’s detailed procedure is illustrated in Figure 3 and Algorithm 1. It comprises eight steps involving data transmission and computation. Let K represent the number of classes in the classification task C = { C 0 , C 1 , , C K 1 } .
The following provides a step-by-step description of the protocol:
Step 1 (client): 
The client generates a secret–public key pair ( s k , p k ) and sends p k to the model holder and server.
Step 2 (client): 
The client encrypts each element of the feature vector by packing it using Equation (6):

$$[\![X_i]\!]_1 = \mathrm{Enc}(pk, \mathrm{poly}_1(x_i^b)), \qquad [\![X_i]\!]_2 = \mathrm{Enc}(pk, \mathrm{poly}_2(x_i^b)).$$

For $i = 1, \dots, N$, the client sends the ciphertexts to the model holder.
Step 3 (model holder): 
For $j = 1, \dots, m$, the model holder encrypts the threshold $t_j$ by packing it using Equation (6):

$$[\![t_j]\!]_1 = \mathrm{Enc}(pk, \mathrm{poly}_1(t_j^b)), \qquad [\![t_j]\!]_2 = \mathrm{Enc}(pk, \mathrm{poly}_2(t_j^b)).$$

For $k = 1, \dots, m+1$, the model holder generates the path vector $P_k$ multiplied by a random number,

$$P'_k = r_k \cdot P_k,$$

and the following classification result vector:

$$V_k := r'_k \cdot P_k + [\mathrm{score}(L_k), 0, \dots, 0],$$

where $\mathrm{score}(L_k) \in C$ and $r_k, r'_k \in \mathbb{Z}_p^{*}$. As described in Section 3.2, the model holder generates $S$ path matrices $\mathbf{P}_1, \dots, \mathbf{P}_S$ and $S$ classification result matrices $\mathbf{V}_1, \dots, \mathbf{V}_S$. Note that the path vector and the classification result vector corresponding to the same path must be placed in the same matrix row.
For $s = 1, \dots, S$, the path matrix $\mathbf{P}_s$ and classification result matrix $\mathbf{V}_s$ are encrypted as follows:

$$[\![\mathbf{P}_s]\!]_{mat} = \mathrm{Enc}(pk, \mathrm{poly}_{mat}(\mathbf{P}_s)), \qquad [\![\mathbf{V}_s]\!]_{mat} = \mathrm{Enc}(pk, \mathrm{poly}_{mat}(\mathbf{V}_s)),$$

where Equation (16) is used to compute $\mathrm{poly}_{mat}$. The ciphertext pairs of the thresholds and their comparative feature vector elements,

$$([\![X_{\lambda_j}]\!]_1, [\![X_{\lambda_j}]\!]_2), \qquad ([\![t_j]\!]_1, [\![t_j]\!]_2),$$

and the ciphertext pairs of the path matrices and classification result matrices,

$$[\![\mathbf{P}_s]\!]_{mat}, \qquad [\![\mathbf{V}_s]\!]_{mat},$$

are sent to the server for $j = 1, \dots, m$ and $s = 1, \dots, S$, respectively.
Step 4 (server): 
The server calculates $[\![d_j]\!]$ as in Equation (10), with $a = t_j$ and $b = X_{\lambda_j}$. The server masks $[\![d_j]\!]$ in encrypted form using a random polynomial $\gamma_j$ with coefficients in $\mathbb{Z}_p$ and then sends $[\![d'_j]\!]$ to the client.
Step 5 (client): 
The client decrypts $[\![d'_j]\!]$, obtains $d'_j = \mathrm{Dec}(sk, [\![d'_j]\!])$, and returns it to the server.
Step 6 (server): 
The server recovers $d_j$ as in Equation (13) and forms the comparison result vector $B = [1, b_1, \dots, b_m]$: if any of the $i\mu$-th coefficients ($i = 0, \dots, \mu-1$) of $d_j$ is zero, then $b_j = 1$; otherwise, $b_j = 0$.
Step 7 (server): 
The server packs the comparison result vector $B$ using the packing method of Section 2.3 to obtain $\mathrm{poly}_t(B)$. The server calculates

$$[\![\tilde{\mathbf{P}}_s]\!] = [\![\mathbf{P}_s]\!]_{mat} \otimes \mathrm{poly}_t(B), \qquad [\![\tilde{\mathbf{V}}_s]\!] = [\![\mathbf{V}_s]\!]_{mat} \otimes \mathrm{poly}_t(B)$$

for $s = 1, \dots, S$ and sends $([\![\tilde{\mathbf{P}}_s]\!], [\![\tilde{\mathbf{V}}_s]\!])$ to the client.
Step 8 (client): 
The client computes the number of path matrices $S = \lceil (m+1)/m' \rceil$, where $m' = \lfloor n/(m+1) \rfloor$, as in Equation (21). Then, for $s = 1, \dots, S$, it decrypts $[\![\tilde{\mathbf{P}}_s]\!]$ to obtain a polynomial with randomized non-zero coefficients for the path matrix $\tilde{\mathbf{P}}_s$, in which the coefficient of $x^{(k-1)(m+1)}$ is the $k$-th path cost $\mathrm{pc}_k \cdot r_k$ according to Equations (19) and (20). Since $\mathrm{pc}_k \cdot r_k = 0 \Leftrightarrow \mathrm{pc}_k = 0$, the client checks whether any $(k-1)(m+1)$-th coefficient ($k = 1, \dots, m'$) is 0, which implies that the corresponding path cost $\mathrm{pc}_k = 0$. If so, the corresponding leaf of $\mathcal{T}$ gives the classification result according to Equation (25), and the client decrypts $[\![\tilde{\mathbf{V}}_s]\!]$ and obtains $C_{out}$ from the coefficient of $x^{(k-1)(m+1)}$.
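The randomization and readout logic of Steps 3 and 8 can be checked with a plaintext sketch (vectors instead of packed polynomials, and a toy prime standing in for the plaintext modulus): $\langle P'_k, B \rangle = r_k \cdot \mathrm{pc}_k$ vanishes exactly where $\mathrm{pc}_k = 0$, and there $\langle V_k, B \rangle = r'_k \cdot \mathrm{pc}_k + \mathrm{score}(L_k)$ reduces to $\mathrm{score}(L_k)$.

```python
import random

p = 12289                          # a small NTT-friendly prime, standing in for Z_p

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def randomized_readout(P, scores, B):
    """Plaintext sketch of Equations (24)-(25) and the Step 8 readout."""
    for P_k, score in zip(P, scores):
        r, r2 = random.randrange(1, p), random.randrange(1, p)
        Pk_rand = [r * c % p for c in P_k]                # P'_k = r_k * P_k
        V_k = [(r2 * c + (score if i == 0 else 0)) % p    # V_k = r'_k P_k + [score,0,..]
               for i, c in enumerate(P_k)]
        if dot(Pk_rand, B) % p == 0:                      # randomized path cost is 0
            return dot(V_k, B) % p                        # ... so this equals score(L_k)
    return None

P = [[1, -1, 0, 0],                # path vectors from Equation (2)
     [2, 1, -1, -1],
     [1, 1, -1, 1],
     [0, 1, 1, 0]]
scores = [10, 11, 10, 12]          # numeric stand-ins for C0, C1, C0, C2
print(randomized_readout(P, scores, [1, 0, 1, 1]))   # Bob: b = (0, 1, 1) -> 11 (C1)
```

Because $p$ is prime and $r_k \ne 0$, the randomized cost $r_k \cdot \mathrm{pc}_k \bmod p$ is zero exactly when $\mathrm{pc}_k$ is, so the masking never changes which leaf is selected.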
Algorithm 1 Efficient PPDT Inference via Homomorphic Matrix Multiplication
Input:
X = (X_1, …, X_N): X_i denotes the i-th feature of data X with N features; T = (t_1, …, t_m): t_j denotes the j-th threshold of decision tree T with m decision nodes;
n: degree of the polynomial modulus; S: number of path matrices; μ: maximum bit length of the X_i and t_j.
Output:
Classification result C_out
1: Client generates a secret–public key pair (sk, pk) and sends pk to the model holder and the server.
Client encrypts each feature of data X (refer to Equation (22))
2: for i = 1 to N do
3:   [[X_i]]_1 ← Enc(pk, poly_1(x_i^b)), [[X_i]]_2 ← Enc(pk, poly_2(x_i^b))
4: end for
5: return [[X]] = {([[X_i]]_1, [[X_i]]_2)}_{i=1..N}, and send [[X]] to the model holder.
Model holder encrypts the thresholds t_j (refer to Equation (23))
6: for j = 1 to m do
7:   [[t_j]]_1 ← Enc(pk, poly_1(t_j^b)), [[t_j]]_2 ← Enc(pk, poly_2(t_j^b))
8: end for
9: return [[t]] = {([[t_j]]_1, [[t_j]]_2)}_{j=1..m}
Model holder generates path matrix P and classification result matrix V (refer to Equations (24) and (25))
10: for k = 1 to m + 1 do
11:   generate P_k; select r_k, r'_k ∈ Z_p^*; compute P'_k ← r_k · P_k, V_k := r'_k · P_k + [score(L_k), 0, …, 0]
12: end for
13: return P, V divided into S matrices P_1, …, P_S and V_1, …, V_S, as shown in Equation (21).
Model holder encrypts the path matrices and classification result matrices (refer to Equations (16) and (26))
14: for s = 1 to S do
15:   [[P_s]]_mat ← Enc(pk, poly_mat(P_s)), [[V_s]]_mat ← Enc(pk, poly_mat(V_s))
16: end for
17: return [[P]] = {[[P_s]]_mat}, [[V]] = {[[V_s]]_mat}, and send [[X]], [[t]], [[P]], [[V]] to the cloud server.
Server calculates d via one round of communication with the client (refer to Equation (10))
18: for j = 1 to m do
19:   compute [[d_j]] ← ([[t_j]]_1 ⊖ [[X_{λ_j}]]_1) ⊗ ([[t_j]]_2 ⊖ [[X_{λ_j}]]_2 ⊕ poly_3(1)) ⊕ poly~(1)
20:   select a random polynomial γ_j with coefficients in Z_p and compute [[d'_j]] ← [[d_j]] ⊕ γ_j
21: end for
22: return [[d']] = {[[d'_j]]}, and send [[d']] to the client.
23: Client decrypts d' ← Dec(sk, [[d']]) and sends it back to the server.
Server computes the comparison result vector B (refer to Equation (13))
24: for j = 1 to m do
25:   b_j ← 0
26:   for i = 0 to μ − 1 do
27:     if the iμ-th coefficient of d_j is 0 then
28:       b_j ← 1
29:     end if
30:   end for
31: end for
32: return B = [1, b_1, …, b_m]
Server computes the encrypted path and classification result matrices (refer to Equation (29))
33: for s = 1 to S do
34:   [[P̃_s]] ← [[P_s]]_mat ⊗ poly_t(B), [[Ṽ_s]] ← [[V_s]]_mat ⊗ poly_t(B)
35: end for
36: return ([[P̃_s]], [[Ṽ_s]]) for s = 1, …, S, and send them to the client.
Client decrypts and obtains the classification result (refer to Equations (19), (20) and (25))
37: the default class is C_out = ∅
38: for s = 1 to S do
39:   P̃_s ← Dec(sk, [[P̃_s]])
40:   for k = 1 to m′ do
41:     if the (k − 1)(m + 1)-th coefficient of P̃_s is 0 then
42:       Ṽ_s ← Dec(sk, [[Ṽ_s]])
43:       C_out ← the coefficient of x^{(k−1)(m+1)} in Ṽ_s
44:     end if
45:   end for
46: end for
47: return C_out

3.4. Leaf Node Pruning (LNP)

In this subsection, we describe how to reduce the number of path costs to be calculated. Let the number of leaf nodes of decision tree $\mathcal{T}$ be $m+1$ and the number of classes be $K = |C|$. In this case, we do not calculate the path costs for one designated class (e.g., $C_0$) but only the path costs for the other $K-1$ classes. If any of these path costs is 0, the output is the class corresponding to that path; if none of them is 0, the output is the class $C_0$ corresponding to the uncalculated paths. The detailed procedure is illustrated in Algorithm 2.
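The pruning step and the client-side fallback can be sketched in plaintext Python; the path vectors and leaf labels below are the Figure 1 example, with string labels standing in for the classes.

```python
# Plaintext sketch of leaf node pruning: drop every path vector whose leaf
# scores the default class C0; fall back to C0 when no remaining cost is 0.

def prune_default(paths, scores, default="C0"):
    kept = [(p, s) for p, s in zip(paths, scores) if s != default]
    return [p for p, _ in kept], [s for _, s in kept]

def classify(B, paths, scores, default="C0"):
    for p, s in zip(paths, scores):
        if sum(x * y for x, y in zip(p, B)) == 0:   # path cost <P_k, B> = 0
            return s
    return default                                  # no zero cost -> default class

paths = [[1, -1, 0, 0], [2, 1, -1, -1], [1, 1, -1, 1], [0, 1, 1, 0]]
scores = ["C0", "C1", "C0", "C2"]                   # Figure 1: score(L1)..score(L4)
paths, scores = prune_default(paths, scores)        # only the C1 and C2 paths remain

print(classify([1, 0, 1, 1], paths, scores))   # Bob: b = (0, 1, 1) -> C1
print(classify([1, 1, 0, 0], paths, scores))   # b = (1, 0, 0): L1 pruned -> C0
```

Here two of the four path costs are never computed, illustrating why the saving grows with the fraction of leaves labeled with the default class.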
Algorithm 2 Efficiency-Enhanced PPDT with LNP (Algorithm 1 + LNP)
Input:
X = (X_1, …, X_N): X_i denotes the i-th feature of data X with N features, and T = (t_1, …, t_m): t_j denotes the j-th threshold of decision tree T with m nodes.
n: degree of polynomial, S: number of path matrices, μ: maximum bit length of X_i and t_j
K = |C|: number of classes C = {C_0, C_1, …, C_{K−1}}
Output:
Classification result C_out
1: Given path matrix P and classification result matrix V for T with m + 1 leaf nodes.
Model holder prunes leaf nodes that result in the default class
⇒ this process (Lines 2–9 below) is inserted between Line 12 and Line 13 of Algorithm 1
2: for i = 1 to m + 1 do
3:    check the classification value score(L_i) of leaf node L_i of T
4:    if score(L_i) = C_0 then
5:      delete P_i and V_i from P and V, respectively:
6:      P′ ← P ∖ P_i, V′ ← V ∖ V_i
7:    end if
8: end for
9: return P′, V′
10: Given encrypted path matrix [[P̃_s]] and classification result matrix [[Ṽ_s]] (Line 36 of Algorithm 1)
Client decrypts and obtains the classification result (refer to Equations (19), (20) and (25))
⇒ this process (Lines 11–21 below) replaces Lines 37–47 of Algorithm 1
11: the default class is C_out = C_0
12: for s = 1 to S do
13:    P̃_s ← Dec([[P̃_s]], sk)
14:    for k = 1 to m′ do
15:      if the (k − 1)(m + 1)-th coefficient in P̃_s = 0 then
16:        compute Ṽ_s ← Dec([[Ṽ_s]], sk)
17:        C_out ← the coefficient of x^{(k−1)(m+1)} in Ṽ_s
18:      end if
19:    end for
20: end for
21: return C_out
Note that the default class C 0 must be securely shared in advance between the client and the model holder. Assigning the majority class as the default is generally more effective. The selection of the class excluded from computation depends on the classification task, as outlined below:
Binary classification tasks 
(e.g., disease detection, abnormality detection): Only the decision path for class 1 (positive or anomalous) is evaluated. If the result is 0, class 1 is returned; otherwise, class 0 (negative or normal) is output.
Multi-class classification tasks: 
Only K − 1 classes are evaluated. If the dataset exhibits class imbalance (e.g., ImageNet), the majority class is assigned as the default. In the absence of such imbalance (e.g., MNIST), one class is randomly selected to serve as the default (i.e., the class not evaluated).
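In plaintext terms, the decision rule described above can be sketched as follows (function and variable names are illustrative; in the actual protocol the path costs are recovered from decrypted polynomial coefficients):

```python
# Plaintext sketch of the default-class rule: path costs are computed only
# for the K - 1 non-default classes; if none of them is zero, the default
# class C_0 is returned without its path cost ever being evaluated.

def classify_with_default(path_costs_by_class, default_class):
    """path_costs_by_class maps each non-default class to its path cost."""
    for cls, cost in path_costs_by_class.items():
        if cost == 0:        # a zero path cost identifies the matched leaf
            return cls
    return default_class     # no match: the uncomputed default path was taken

# Binary task: only class 1 (positive) is evaluated.
positive_matched = classify_with_default({1: 0}, default_class=0)
no_match = classify_with_default({1: 3}, default_class=0)
```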
To enhance clarity, we next illustrate homomorphic matrix multiplication and the leaf node pruning process with a simple example based on Figure 1.
Example 1.
Figure 1 shows an example decision tree for three classes C = {C_0, C_1, C_2}, in which the majority class C_0 is assigned as the default. The feature vector is X := (X_1, X_2, X_3), where X_1, X_2, and X_3 denote height, weight, and age, respectively. Client Bob, whose feature vector is X = (X_1, X_2, X_3) = (150, 64, 32), uses the service to infer his health status. Decision tree T has m = 3 nodes with thresholds (t_1, t_2, t_3) = (170, 60, 25). In this case, the comparison result vector can be expressed as B = (1, b_1, b_2, b_3).
From Figure 1, it can be easily seen that Bob’s data should be classified to leaf node L 2 by Path 2 . In the following, we show how to perform classification inference using client Bob’s data and the model holder’s decision tree model while keeping them encrypted. According to Equations (2) and (19), we have
pc_1 = ⟨P_1, B⟩ = ⟨[1, −1, 0, 0], [1, b_1, b_2, b_3]⟩, pc_2 = ⟨P_2, B⟩ = ⟨[2, 1, −1, −1], [1, b_1, b_2, b_3]⟩, pc_3 = ⟨P_3, B⟩ = ⟨[1, 1, −1, 1], [1, b_1, b_2, b_3]⟩, pc_4 = ⟨P_4, B⟩ = ⟨[0, 1, 1, 0], [1, b_1, b_2, b_3]⟩.
Therefore,
P = [P_1; P_2; P_3; P_4] = [[1, −1, 0, 0], [2, 1, −1, −1], [1, 1, −1, 1], [0, 1, 1, 0]].
LNP process:  Pruning the leaf nodes with class C_0, i.e., L_1 and L_3, and multiplying the remaining rows P_k by random non-zero numbers r_k ∈ Z_p^* (k = 2, 4), we obtain the path matrix
P′ = [r_2 P_2; r_4 P_4] = [P′_1; P′_2] = [[2r_2, r_2, −r_2, −r_2], [0, r_4, r_4, 0]],
and the corresponding classification result matrix
V′ = [r_2 P_2 + [C_1, 0, 0, 0]; r_4 P_4 + [C_2, 0, 0, 0]] = [[2r_2 + C_1, r_2, −r_2, −r_2], [C_2, r_4, r_4, 0]].
Instead of using the original matrices P and V, we only need the pruned matrices P′ and V′ for the calculation, thus saving computational cost, as shown in Figure 4.
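The arithmetic of this example can be checked in plaintext. The sketch below assumes the comparison vector B = [1, 0, 1, 1] and a sign pattern for P reconstructed from the surrounding derivation; the concrete random factors are illustrative:

```python
# Plaintext check of the example: compute pc_k = <P_k, B>, then apply LNP
# with random non-zero factors r2, r4. Signs of P follow the reconstruction
# used in this example.
import random

B = [1, 0, 1, 1]                       # Bob's comparison result vector
P = [[1, -1, 0, 0],                    # Path 1 (leaf L1, class C0)
     [2, 1, -1, -1],                   # Path 2 (leaf L2, class C1)
     [1, 1, -1, 1],                    # Path 3 (leaf L3, class C0)
     [0, 1, 1, 0]]                     # Path 4 (leaf L4, class C2)

pc = [sum(p * b for p, b in zip(row, B)) for row in P]
# Only pc[1] (Path 2) vanishes, so Bob is classified to leaf L2 (class C1).

r2, r4 = random.randint(1, 100), random.randint(1, 100)
P_pruned = [[r2 * v for v in P[1]], [r4 * v for v in P[3]]]
pc_pruned = [sum(p * b for p, b in zip(row, B)) for row in P_pruned]
# Scaling by non-zero r_k preserves exactly which path costs are zero.
```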
The following two processes are run on encrypted data.
Comparison on each node:  To check whether X_i > t_i at node D_i, let μ denote the bit length of the two integers X_i and t_i, where i = 1, 2, 3. For ease of explanation, we use X_3 and t_3 as an example, with a bit length of μ = 6.
  • Bob encrypts each feature of his own data using Equations (6) and (9).
    [ [ X ] ] : = { [ [ X i ] ] 1 , [ [ X i ] ] 2 } i = 1 3 = { Enc ( p k , poly 1 ( X i ) ) , Enc ( p k , poly 2 ( X i ) ) } i = 1 3 ,
    where X_3 = 32 = (100000)_2 in binary, poly_1(32) = 1, and poly_2(32) = x^6 + x^{12} + x^{18} + x^{24} + x^{30}.
  • Model holder encrypts each threshold of T using Equations (6) and (8).
    [ [ t ] ] : = { [ [ t i ] ] 1 , [ [ t i ] ] 2 } i = 1 3 = { Enc ( p k , poly 1 ( t i ) ) , Enc ( p k , poly 2 ( t i ) ) } i = 1 3 ,
    where t_3 = 25 = (11001)_2 in binary, poly_1(25) = x + x^2 + x^5, and poly_2(25) = x^{11} + x^{16} + x^{17} + x^{22} + x^{23} + x^{28} + x^{29}.
  • Server runs homomorphic operations to obtain polynomials of the comparison results and then adds a random polynomial using Equation (10) over the polynomial ring Z_q, in which x^n ≡ −1, as follows.
    [[d]] := {[[d_i]]}_{i=1}^{3} = {Enc(pk, (poly_1(t_i) − poly_1(X_i)) ⊗ (poly_2(t_i) − poly_2(X_i) + poly_1^3) ⊕ poly~(1))}_{i=1}^{3}, [[d′]] := {[[d_i + γ]]}_{i=1}^{3},
    where γ ∈ Z_q.
  • Bob decrypts [[d′]] and sends d′ back to the server. After removing the random polynomial γ from d′, the server can obtain the comparison results from d = {d_i}_{i=1}^{3} by checking whether any coefficient of the form x^{(j−1)μ} in d_i equals 0. If so, X_i > t_i and b_i is set to 1; otherwise, b_i = 0.
    For example, d_3 = 3x^6 + x^{12} + 4x^{18} + 4x^{24} + 4x^{30} + other terms, in which the coefficient of x^0 is 0, so b_3 = 1. Similarly, b_1 = 0 and b_2 = 1; therefore, B = [1, b_1, b_2, b_3] = [1, 0, 1, 1].
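The masking step in this exchange can be sketched in plaintext: the server adds a random polynomial γ before Bob decrypts and removes it afterwards, so only the server learns the comparison bits. The modulus and coefficients below are illustrative toy values:

```python
# Plaintext sketch of the blinding step: the server adds a random polynomial
# gamma to d before Bob decrypts, then subtracts it from what Bob returns,
# so Bob only ever sees uniformly random coefficients. Toy values throughout.
import secrets

q = 2**54                              # illustrative ciphertext modulus
d = [0, 3, 1, 4, 4, 4]                 # toy comparison polynomial d_i; a zero
                                       # designated coefficient means X_i > t_i

gamma = [secrets.randbelow(q) for _ in d]                 # random mask polynomial
d_masked = [(c + g) % q for c, g in zip(d, gamma)]        # what Bob decrypts/returns
d_clear = [(c - g) % q for c, g in zip(d_masked, gamma)]  # server removes the mask
b_i = 1 if d_clear[0] == 0 else 0      # server extracts the comparison bit
```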
Classification inference:
  • The server runs a homomorphic multiplication for Bob using vector B and the following encrypted path matrix [[P′]] and classification result matrix [[V′]] based on Equations (14), (16) and (18)
    [[P′]] = [[poly_mat(P′)]] = Enc(2r_2 + r_2 x − r_2 x^2 − r_2 x^3 + 0·x^4 + r_4 x^5 + r_4 x^6 + 0·x^7), [[V′]] = [[poly_mat(V′)]] = Enc((2r_2 + C_1) + r_2 x − r_2 x^2 − r_2 x^3 + C_2 x^4 + r_4 x^5 + r_4 x^6)
    to obtain
    [[poly_mat(P′)]] ⊗ poly_t(B) = Enc((2r_2 + r_2 x − r_2 x^2 − r_2 x^3 + r_4 x^5 + r_4 x^6)(1 − x^{n−3} − x^{n−2})) = Enc(0·x^0 + r_4 x^4 + other terms) = Enc(F_path(x)), [[poly_mat(V′)]] ⊗ poly_t(B) = Enc(((2r_2 + C_1) + r_2 x − r_2 x^2 − r_2 x^3 + C_2 x^4 + r_4 x^5 + r_4 x^6)(1 − x^{n−3} − x^{n−2})) = Enc(C_1 x^0 + (r_4 + C_2) x^4 + other terms) = Enc(F_class(x)).
  • Bob decrypts and obtains the polynomial F_path(x) corresponding to a (k, ℓ)-dimensional path matrix and then checks the coefficients of its x^{(i−1)ℓ} (i = 1, …, k) terms. In this example, k = 2 and ℓ = 4, so whether the coefficient of x^0 or x^4 is zero is checked.
    In fact,
    F_path(x) = ⟨P′_1, B⟩ + ⟨P′_2, B⟩ · x^4 + other terms,
    F_class(x) = (⟨P′_1, B⟩ + C_1) + (⟨P′_2, B⟩ + C_2) · x^4 + other terms,
    and since ⟨P′_1, B⟩ = r_2 ⟨P_2, B⟩ and ⟨P′_2, B⟩ = r_4 ⟨P_4, B⟩,
    • the coefficient of x^0 is zero ⟺ pc_2 = ⟨P_2, B⟩ = 0, and
    • the coefficient of x^4 is zero ⟺ pc_4 = ⟨P_4, B⟩ = 0,
    which link to leaf nodes L_2 and L_4, respectively.
    As shown in Equation (38), the coefficient of x^0 in the first polynomial is zero if and only if pc_2 = 0; here it is zero, which means that X is classified to leaf node L_2. The coefficient of x^0 in the second polynomial then gives the classification result; that is, C_out = C_1.
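The coefficient extraction in this example can be reproduced in plaintext by multiplying the packed polynomials modulo x^n + 1. The sketch below uses a small illustrative n = 16 and concrete stand-in values for r_2 and r_4 (encryption omitted):

```python
# Plaintext reproduction of the coefficient extraction: multiplying the
# packed polynomial poly_mat(P') by poly_t(B) modulo x^n + 1 places the
# inner product <P'_k, B> at coefficient (k - 1)(m + 1).

def negacyclic_mul(a, b, n):
    """Multiply coefficient vectors a, b modulo x^n + 1 (so x^n = -1)."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if i + j >= n else 1
            c[(i + j) % n] += sign * ai * bj
    return c

n, m = 16, 3
r2, r4 = 5, 7
# poly_mat(P'): rows [2r2, r2, -r2, -r2] and [0, r4, r4, 0] at offsets 0 and m + 1.
pmat = [2 * r2, r2, -r2, -r2, 0, r4, r4, 0] + [0] * (n - 8)
# poly_t(B) for B = [1, 0, 1, 1]: b_0 - b_2 * x^(n-2) - b_3 * x^(n-3).
pt = [0] * n
pt[0], pt[n - 2], pt[n - 3] = 1, -1, -1

prod = negacyclic_mul(pmat, pt, n)
# prod[0] = <P'_1, B> = 0 (Path 2 matches), prod[4] = <P'_2, B> = r4 != 0.
```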

3.5. Complexity

The client encrypts each element of the feature vector with two different packing methods for the comparison protocol and sends them to the model holder. The model holder encrypts the threshold of each decision node, i.e., 2m encryptions in total. In addition, the path and classification result matrices are each encrypted S = ⌈(m + 1)/m′⌉ times, where m′ = ⌊n/(m + 1)⌋. The model holder sends the pairs of threshold and feature vector elements and the pairs of path and classification result matrices to the server. Next, the server and client cooperate to perform the comparison protocol m times; here, the client performs decryption m times. The server performs homomorphic multiplication of the path and classification result matrices 2S times and sends the results to the client. The client obtains the classification result by decrypting the received path and classification result matrices 2S times.
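These counts can be sketched numerically; the helper below is illustrative and simply evaluates m′ = ⌊n/(m + 1)⌋ and S = ⌈(m + 1)/m′⌉ for the parameter n = 2048 used in the experiments:

```python
# Illustrative check of the packing counts in this section:
# m' = floor(n / (m + 1)) path costs fit into one packed matrix, and
# S = ceil((m + 1) / m') packed matrices are needed in total.
import math

def packing_counts(n, m):
    m_prime = n // (m + 1)               # path costs per packed matrix
    S = math.ceil((m + 1) / m_prime)     # number of packed matrices
    return m_prime, S

small = packing_counts(2048, 35)      # all 36 path costs fit in one matrix
large = packing_counts(2048, 1027)    # BK-sized tree: one path cost per matrix
```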

3.6. Security Analysis

For a homomorphic encryption scheme, indistinguishability under chosen-plaintext attack (IND-CPA) is the basic security requirement. The SHE scheme used in the experiments in Section 4 is proven IND-CPA secure under the ring-LWE assumption. In our proposed approach, as the inputs to the homomorphic computation are ciphertexts, the IND-CPA security of the underlying SHE scheme is not weakened.
Consider the entities involved in the protocol one by one (see Figure 3), and assume that they are all honest but curious and do not collude with one another.
  • Client. The client can decrypt the ciphertexts sent from the cloud server in Steps 5 and 7 using the secret key and obtain d′, which is related to the comparison results and the decision tree structure. However, in Step 4, noise from the cloud server is added for randomization. Therefore, the client receives a polynomial with completely random coefficients in d′. Similarly, P̃_s is also a polynomial with random coefficients except for the zero coefficient that determines the classification C_out ∈ C, that is, the corresponding coefficient of Ṽ_s. Thus, the client can obtain the classification C_out without learning the decision tree model T.
  • Model holder. The model holder only obtains the ciphertext of data X in Step 2, and IND-CPA security guarantees the privacy and security of the data.
  • Cloud server. The server honestly performs homomorphic operations for the system but is curious about the data provided by the client and the model holder. In Step 3, it receives only inputs encrypted under an SHE scheme that satisfies IND-CPA security, ensuring the confidentiality of both user data and model parameters. Even if the server obtains the comparison result vector B = [1, b_1, …, b_m] from the d′ sent in Step 5, it can recover neither the original data X nor meaningful information about the decision tree T (thresholds t, path matrix P, and classification results V). This is due not only to encryption but also to the randomization of P with the random factors r_k in Step 3, which prevents linking b_i (i = 1, …, m) to specific nodes of T.
Beyond technical guarantees, our approach also addresses ethical challenges inherent to PPML in cloud environments. Specifically, it supports secure inference while protecting sensitive user data, ensuring model confidentiality, and reducing the risks of unauthorized access or data misuse. These safeguards are especially crucial in sensitive domains such as healthcare and finance, where balancing performance with privacy and intellectual property protection is key to responsible AI deployment.

4. Experiments

4.1. Experimental Setup

We implemented the proposed PPDT protocol in C++, using the BFV scheme [21] in SEAL 3.3 [22] as the encryption library. In our implementation, two comparison protocols were adopted: Wang et al.'s efficiency-enhanced scheme (Protocol-1 in [17]) and Lu et al.'s non-interactive private comparison (XCMP) [15]. The path cost calculation in Section 3.2 was implemented.
Parameter Choice. First, to ensure that the above comparison methods can accurately decrypt the comparison results, the parameters n, q, p, and σ of the two methods must satisfy the following two inequalities, respectively. Second, they must be selected to meet the 128-bit security level, which is the currently recommended level in [23]. The following parameters were used to ensure that the proposed model worked accurately and had 128-bit security:
  • Efficiency-enhanced scheme [17]
    n = 2048, log_2 q = 54, p = 40961, σ = 3.2.
    These satisfy the following inequality: q > 8p^2 σ^4 n^2 + 4pσn^2/3.
  • XCMP [15]
    n = 8192, log_2 q = 110, p = 8191, σ = 3.2.
    These satisfy the following inequality: q > p^3 σ^2 n(σn + 1)^2 + 2p^2 σn(σn + 1).
Note that σ = 3.2 is the default noise standard deviation used in Microsoft SEAL. The inequalities involving n, q, p, and σ ensure the correctness of the corresponding comparison schemes. To support a maximum input bit length ℓ, we chose n to satisfy the conditions from Wang et al. [17] (i.e., 2ℓ^2 ≤ n) and Lu et al. [15] (i.e., ℓ ≤ log_2 n). Based on these constraints, the comparison protocols could handle 32-bit and 13-bit integers for Wang et al.'s and Lu et al.'s schemes, respectively. This implies that the efficiency-enhanced scheme [17] supports comparisons of larger integers than does XCMP [15], even with a relatively smaller n. Given the selected n, we chose q to ensure a 128-bit security level as recommended in [23] and then determined p such that it satisfied the necessary inequalities while remaining efficient to compute.
The experiments were conducted on a standard PC with a Core i7-7700K (4.20 GHz) processor in single-thread mode. We used scikit-learn, a standard open-source machine learning library in Python 3.7.2, to train the decision trees. In both the BFV scheme and the comparison protocol by Lu et al. [15], an input value must be represented as an integer a ∈ Z_p. Whenever an appropriate number of multiplications has been performed, the fractional part of a value must be truncated, and the value normalized to the range [0, 2^13).
In the performance evaluations, the computational time was measured by averaging over 10 trials for the following 8 datasets from the UCI machine learning repository [20]: Heart Disease (HD), Spambase (SP) (which were used for the benchmark in [15]), Breast Cancer Wisconsin (BC), Nursery (NS), Credit-Screening (CS), Shuttle (ST), EEG Eye State (EGG), and Bank Marketing (BK). The dataset information and the parameters of the decision trees used are listed in Table 1.

4.2. Performance Comparisons

4.2.1. Path Cost Calculation

We compared the computational time to obtain the path costs under the three-party protocol when the proposed matrix multiplication in Section 3.2 and Yasuda et al.'s homomorphic inner product calculation [19] were used. Table 2 shows the computational time for the path costs. We also show, in parentheses, the computation time when LNP was introduced into the proposed path cost calculation.
In calculating path costs using the naive inner product, m + 1 homomorphic multiplications are required. Therefore, as the number of nodes m in a tree increases, the computation time for path costs also increases. On the other hand, when calculating the path costs by matrix multiplication, the computation time was the same for all decision trees with m ≤ 35 in Table 2. The reason is that, for parameter n, the entire set of path costs can be calculated with one matrix when m ≤ 35. Therefore, for datasets other than BK, more than one path cost can be computed with one matrix, and the computation time for the path costs was reduced by more than 90% compared to the inner product case. However, in the case of BK, only ⌊n/(m + 1)⌋ = ⌊2048/1028⌋ = 1 path cost could be calculated with a single matrix multiplication. Hence, there is no difference in computation time between matrix multiplication and the inner product when ⌊n/(m + 1)⌋ = 1.
With LNP applied, the path cost computation time remained unchanged for decision trees (e.g., BC, ST, CS, HD) with m ≤ 35, as all path costs could be computed within a single matrix. However, for larger trees (e.g., NS, SP, EGG, BK), LNP significantly reduced the computation time: by 26% for NS, 50% for SP, 48% for EGG, and 47% for BK. The effect correlated with the number of classes: datasets with fewer classes (SP, EGG, and BK each have 2) benefited more than NS (5 classes), as fewer classes enhance pruning efficiency. See Table 1 and Figure 5 for dataset-specific details.

4.2.2. Protocol Efficiency

In this experiment, we used the XCMP of Lu et al. [15] for the comparisons. We implemented the following two protocols and measured their computation times.
  • To evaluate the efficiency of different homomorphic encryption approaches, we also integrated the XCMP scheme into the comparison component of the three-party protocol described in Section 3.3. The path cost computation was performed following the procedure outlined in Steps 6–8 of Section 3.3.
  • A naive two-party protocol using XCMP by Lu et al. computed the two-party comparison results (see Appendix B); the path costs were then computed under encryption using additive homomorphism.
Table 3 provides the measurement results.
In the naive two-party protocol, the path cost calculation is performed only by homomorphic addition, so O(m^2) additions are required. Therefore, the path cost calculation accounted for more than 80% of the total time. In contrast, when XCMP was applied to the proposed protocol, the path cost calculation time was reduced to 0.5% of the total time. As a result, a substantial time reduction was achieved by applying XCMP to the proposed protocol on the experimental datasets. In particular, the reduction rate was higher for the EGG and BK datasets, which have a large number of decision nodes.

4.3. Time Complexity

To verify the computational complexity analysis in Section 3.5, we measured the computation time of the proposed protocol for decision trees constructed with the number of decision nodes m varied between 10 and 1000 using the BK dataset.
Figure 6 provides the experimental results, which demonstrate that the computation time is linear in the number of decision nodes. There were several points of rapid increase in the number of encryptions performed by the model holder, the number of decryptions performed by the client, and the number of path cost computations performed by the server. As explained in Section 3.5, these computation counts depend on S = ⌈(m + 1)/m′⌉, where m′ = ⌊n/(m + 1)⌋. Thus, as m increases, the computation time increases rapidly at these points.

5. Conclusions

In this paper, we proposed a privacy-preserving decision tree prediction protocol that enables secure outsourcing using homomorphic encryption. The proposed method uses the comparison protocol of Wang et al. [17] or Lu et al. [15] to encrypt both the feature vectors of the data holder and the decision tree parameters of the model holder, and the structure of the decision tree model is hidden from both the data holder and the outsourced server by calculating the path costs with the homomorphic matrix multiplication proposed by Duong et al. [18].
Our experimental results demonstrate that the computation time of the three-party protocol using XCMP [15] is drastically reduced, by more than 85% compared to the naive two-party protocol, without a reduction in prediction accuracy. In addition, we compared the method using homomorphic matrix multiplication [18] with the homomorphic inner product [19] and confirmed cases where the computation time was reduced by more than 90%. We also proposed a leaf node pruning (LNP) algorithm to accelerate the three-party prediction protocol; with LNP applied, the computation time was reduced by up to 50%.
However, the proposed method has certain limitations. In particular, while the comparison results at each decision node are revealed to the outsourced server, the server cannot determine which specific feature each node corresponds to. This contrasts with the protocol by Lu et al. [15], which prevents such leakage but incurs significantly higher computational costs. This highlights a fundamental trade-off between security and computational efficiency: our approach enhances scalability and performance at the cost of limited information leakage. In many practical scenarios, this trade-off may be acceptable; however, minimizing data exposure remains a critical objective. In future work, we will aim to design a more efficient protocol that preserves performance while eliminating any information leakage, thereby strengthening both privacy and practicality.

Author Contributions

Conceptualization, S.F., L.W. and S.O.; methodology, S.F., L.W. and S.O.; software, S.F. and S.O.; validation, S.F. and S.O.; formal analysis, S.F. and L.W.; investigation, S.F. and L.W.; resources, S.O.; data curation, S.F.; writing—original draft preparation, S.F., L.W. and S.O.; writing—review and editing, L.W.; supervision, S.O.; project administration, S.O. and L.W.; funding acquisition, S.O. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by JST CREST (grant number JPMJCR21M1) and JST AIP accelerated program (grant number JPMJCR22U5), Japan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to thank Takuya Hayashi for the useful discussion.

Conflicts of Interest

The authors declare no conflicts of interest.

Notations

The following notations are used in this manuscript:
1{·}   Function that returns 1 if · is true and 0 if false
μ   Maximum bit length of an integer
A = [a_0, …, a_{ℓ−1}]   Vector of length ℓ
A_d = [a_0, …, a_{d−1}, 0, …, 0]   d-bit subvector of A
a_b = [a_0, …, a_{μ−1}]   μ-bit binary vector of integer a, where a_{μ−1} is the least significant bit of integer a
N   Dimension of the feature vector
X = [X_0, …, X_{N−1}]   Feature vector
D_i   Decision node
L_j   Leaf node
λ_i   Index of the feature vector element to be compared at decision node D_i
t_i   Threshold of decision node D_i
c_j   Class to be output by leaf node L_j
m   Number of decision nodes in the decision tree
m + 1   Number of leaf nodes in the decision tree
Parameters used in the ring-LWE-based encryption scheme:
n   An integer power of 2 that denotes the degree of the polynomial x^n + 1; defines the polynomial ring Z := Z[x]/(x^n + 1)
q   An integer composed of q = q_1 × ⋯ × q_k (each q_i a prime); defines the polynomial ring representing the ciphertext space Z_q := Z_q[x]/(x^n + 1)
p   Defines the plaintext space Z_p := Z_p[x]/(x^n + 1), where p and q are mutually prime natural numbers with p < q
σ   Standard deviation of the discrete Gaussian distribution defining the secret key space Z(0, σ^2); the elements of Z(0, σ^2) are polynomials on the ring Z, with each coefficient independently sampled from the discrete Gaussian distribution with variance σ^2

Appendix A. Ring-LWE-Based Homomorphic Encryption

This study used the ring-LWE-based public key homomorphic encryption library called the Simple Encrypted Arithmetic Library (SEAL) v.3.3 [22] to implement our protocols. SEAL implements the somewhat homomorphic encryption scheme proposed by Fan and Vercauteren [21]. Owing to its additive and multiplicative homomorphism, packing plaintexts enables efficient homomorphic inner product and matrix multiplication calculations. See [21,22] for details of the encryption scheme.
The somewhat homomorphic encryption scheme consists of the following four basic algorithms:
  • ParamGen ( 1 λ ) : input security parameter 1 λ and output system parameter p p = ( n , q , p , σ ) .
  • KeyGen : input system parameter p p and output public key p k and secret key s k .
  • Enc ( p k , · ) : input plaintext m and output ciphertext c.
  • Dec ( s k , · ) : input ciphertext c and output plaintext m.
Homomorphic addition and multiplication algorithms are defined by Add and Mul, and the corresponding decryption algorithms are represented by DecA and DecM, respectively. Let c = Enc(pk, m) and c′ = Enc(pk, m′) denote the ciphertexts of the two plaintexts m and m′, respectively. Then, the sum and product of m and m′ can be calculated as follows.
Add(c, c′) = c_add ∈ Z_q, DecA(sk, c_add) = m + m′ ∈ Z_p; Mul(c, c′) = c_mul ∈ Z_q, DecM(sk, c_mul) = m · m′ ∈ Z_p.
Hereafter, we write Enc(pk, ·) := [[·]], Add(c, c′) := c ⊕ c′, and Mul(c, c′) := c ⊗ c′. The difference between c and c′ can be obtained using homomorphic addition, and we define Sub(c, c′) := c ⊖ c′ = Add(c, −c′).
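The plaintext algebra that Add and Mul preserve is ordinary polynomial arithmetic in Z_p[x]/(x^n + 1). A toy plaintext illustration follows (no encryption is involved; n and p are small illustrative values, not the experimental parameters):

```python
# Toy illustration of the plaintext ring Z_p[x]/(x^n + 1) on which the
# homomorphic Add and Mul of the SHE scheme operate. No encryption here.

def ring_add(a, b, p):
    return [(x + y) % p for x, y in zip(a, b)]

def ring_mul(a, b, n, p):
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if i + j >= n else 1   # reduction by x^n = -1
            c[(i + j) % n] = (c[(i + j) % n] + sign * ai * bj) % p
    return c

n, p = 4, 97
m1 = [1, 2, 0, 0]                      # plaintext polynomial 1 + 2x
m2 = [3, 0, 1, 0]                      # plaintext polynomial 3 + x^2
plain_sum = ring_add(m1, m2, p)        # what DecA recovers from Add
plain_prod = ring_mul(m1, m2, n, p)    # what DecM recovers from Mul
```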

Appendix B. XCMP [15]

Assume that the client and server hold two ℓ-bit integers a and b, respectively. Following the algorithm in Figure A1, the server computes [[C]] to obtain the comparison result in encrypted form. In this scheme, the comparison result is the constant term of the polynomial C, i.e., C = 1{a > b} + other terms of positive degree. Scalar multiplications, additions, and subtractions can be performed on the resulting ciphertext [[C]].
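The exponent-embedding idea at the core of XCMP can be sketched in plaintext: each input is embedded as a monomial exponent, and the position and sign of x^{a−b} modulo x^n + 1 determine 1{a > b}. The following is a simplified stand-in for the encrypted protocol, not the actual algorithm of Figure A1:

```python
# Simplified plaintext sketch of XCMP's exponent embedding: a - b becomes
# the exponent of a monomial modulo x^n + 1; since 0 <= a, b < n, a > b
# iff the surviving monomial has a positive exponent and a + coefficient.
def compare_gt(a, b, n):
    d = a - b
    if d >= 0:
        exp, coeff = d, 1          # x^(a-b) directly
    else:
        exp, coeff = n + d, -1     # x^(a-b) = -x^(n+a-b) since x^n = -1
    return 1 if (coeff == 1 and exp > 0) else 0

n = 8192                            # ring degree; supports 13-bit inputs
gt = compare_gt(32, 25, n)          # 32 > 25
lt = compare_gt(25, 32, n)
eq = compare_gt(7, 7, n)
```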
Figure A1. XCMP [15].

References

  1. Konečný, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar] [CrossRef]
  2. Fredrikson, M.; Jha, S.; Ristenpart, T. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; Ray, I., Li, N., Kruegel, C., Eds.; ACM: New York, NY, USA, 2015; pp. 1322–1333. [Google Scholar] [CrossRef]
  3. Rivest, R.L.; Dertouzos, M.L. On Data Banks and Privacy Homomorphisms. In Foundations of Secure Computation; DeMillo, R., Ed.; Academic Press: Cambridge, MA, USA, 1978; Volume 4, pp. 169–180. [Google Scholar]
  4. Gentry, C. A Fully Homomorphic Encryption Scheme. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2009. [Google Scholar]
  5. Akavia, A.; Leibovich, M.; Resheff, Y.S.; Ron, R.; Shahar, M.; Vald, M. Privacy-Preserving Decision Trees Training and Prediction. In Proceedings of the Machine Learning and Knowledge Discovery in Databases—European Conference, ECML PKDD 2020, Ghent, Belgium, 14–18 September 2020; Proceedings, Part I. Hutter, F., Kersting, K., Lijffijt, J., Valera, I., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2020; Volume 12457, pp. 145–161. [Google Scholar] [CrossRef]
  6. Fréry, J.; Stoian, A.; Bredehoft, R.; Montero, L.; Kherfallah, C.; Chevallier-Mames, B.; Meyre, A. Privacy-Preserving Tree-Based Inference with TFHE. In Proceedings of the Mobile, Secure, and Programmable Networking—9th International Conference, MSPN 2023, Paris, France, 26–27 October 2023; Revised Selected Papers. Bouzefrane, S., Banerjee, S., Mourlin, F., Boumerdassi, S., Renault, É., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2023; Volume 14482, pp. 139–156. [Google Scholar] [CrossRef]
  7. Hao, Y.; Qin, B.; Sun, Y. Privacy-Preserving Decision-Tree Evaluation with Low Complexity for Communication. Sensors 2023, 23, 2624. [Google Scholar] [CrossRef] [PubMed]
  8. Shin, H.; Choi, J.; Lee, D.; Kim, K.; Lee, Y. Fully Homomorphic Training and Inference on Binary Decision Tree and Random Forest. In Proceedings of the Computer Security—ESORICS 2024—29th European Symposium on Research in Computer Security, Bydgoszcz, Poland, 16–20 September 2024; Proceedings, Part III. García-Alfaro, J., Kozik, R., Choras, M., Katsikas, S.K., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2024; Volume 14984, pp. 217–237. [Google Scholar] [CrossRef]
  9. Cong, K.; Das, D.; Park, J.; Pereira, H.V.L. SortingHat: Efficient Private Decision Tree Evaluation via Homomorphic Encryption and Transciphering. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, 7–11 November 2022; Yin, H., Stavrou, A., Cremers, C., Shi, E., Eds.; ACM: New York, NY, USA, 2022; pp. 563–577. [Google Scholar] [CrossRef]
  10. Tai, R.K.H.; Ma, J.P.K.; Zhao, Y.; Chow, S.S.M. Privacy-Preserving Decision Trees Evaluation via Linear Functions. In Proceedings of the Computer Security—ESORICS 2017—22nd European Symposium on Research in Computer Security, Oslo, Norway, 11–15 September 2017; Proceedings, Part II. Foley, S.N., Gollmann, D., Snekkenes, E., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2017; Volume 10493, pp. 494–512. [Google Scholar] [CrossRef]
  11. Zheng, Y.; Duan, H.; Wang, C.; Wang, R.; Nepal, S. Securely and Efficiently Outsourcing Decision Tree Inference. IEEE Trans. Dependable Secur. Comput. 2022, 19, 1841–1855. [Google Scholar] [CrossRef]
  12. Wu, D.J.; Feng, T.; Naehrig, M.; Lauter, K.E. Privately Evaluating Decision Trees and Random Forests. Proc. Priv. Enhancing Technol. 2016, 2016, 335–355. [Google Scholar] [CrossRef]
  13. Maddock, S.; Cormode, G.; Wang, T.; Maple, C.; Jha, S. Federated Boosted Decision Trees with Differential Privacy. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, 7–11 November 2022; Yin, H., Stavrou, A., Cremers, C., Shi, E., Eds.; ACM: New York, NY, USA, 2022; pp. 2249–2263. [Google Scholar] [CrossRef]
  14. Damgård, I.; Geisler, M.; Krøigaard, M. A correction to ’efficient and secure comparison for on-line auctions’. Int. J. Appl. Cryptogr. 2009, 1, 323–324. [Google Scholar] [CrossRef]
  15. Lu, W.; Zhou, J.; Sakuma, J. Non-interactive and Output Expressive Private Comparison from Homomorphic Encryption. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, AsiaCCS 2018, Incheon, Republic of Korea, 4–8 June 2018; Kim, J., Ahn, G., Kim, S., Kim, Y., López, J., Kim, T., Eds.; ACM: New York, NY, USA, 2018; pp. 67–74. [Google Scholar] [CrossRef]
  16. Saha, T.K.; Koshiba, T. An Efficient Privacy-Preserving Comparison Protocol. In Proceedings of the Advances in Network-Based Information Systems, The 20th International Conference on Network-Based Information Systems, NBiS 2017, Ryerson University, Toronto, ON, Canada, 24–26 August 2017; Barolli, L., Enokido, T., Takizawa, M., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 7, pp. 553–565. [Google Scholar] [CrossRef]
  17. Wang, L.; Saha, T.K.; Aono, Y.; Koshiba, T.; Moriai, S. Enhanced Secure Comparison Schemes Using Homomorphic Encryption. In Proceedings of the Advances in Networked-Based Information Systems—The 23rd International Conference on Network-Based Information Systems, NBiS 2020, Victoria, BC, Canada, 31 August–2 September 2020; Barolli, L., Li, K.F., Enokido, T., Takizawa, M., Eds.; Advances in Intelligent Systems and Computing. Springer: Berlin/Heidelberg, Germany, 2021; Volume 1264, pp. 211–224. [Google Scholar] [CrossRef]
  18. Duong, D.H.; Mishra, P.K.; Yasuda, M. Efficient Secure Matrix Multiplication Over LWE-Based Homomorphic Encryption. Tatra Mt. Math. Publ. 2016, 67, 69–83. [Google Scholar] [CrossRef]
  19. Yasuda, M.; Shimoyama, T.; Kogure, J.; Yokoyama, K.; Koshiba, T. Practical Packing Method in Somewhat Homomorphic Encryption. In Proceedings of the Data Privacy Management and Autonomous Spontaneous Security–8th International Workshop, DPM 2013, and 6th International Workshop, SETOP 2013, Revised Selected Papers, Egham, UK, 12–13 September 2013; García-Alfaro, J., Lioudakis, G.V., Cuppens-Boulahia, N., Foley, S.N., Fitzgerald, W.M., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2014; Volume 8247, pp. 34–50. [Google Scholar] [CrossRef]
  20. Kelly, M.; Longjohn, R.; Nottingham, K. The UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu (accessed on 31 March 2025).
21. Fan, J.; Vercauteren, F. Somewhat Practical Fully Homomorphic Encryption. IACR Cryptology ePrint Archive, Report 2012/144, 2012. Available online: https://eprint.iacr.org/2012/144 (accessed on 31 March 2025).
  22. Microsoft SEAL (Release 3.3); Microsoft Research: Redmond, WA, USA, 2019; Available online: https://github.com/Microsoft/SEAL (accessed on 31 March 2025).
  23. Albrecht, M.; Chase, M.; Chen, H.; Ding, J.; Goldwasser, S.; Gorbunov, S.; Halevi, S.; Hoffstein, J.; Laine, K.; Lauter, K.; et al. Homomorphic Encryption Security Standard; Technical report; HomomorphicEncryption.org: Toronto, ON, Canada, 2018. [Google Scholar]
24. Fukui, S.; Wang, L.; Hayashi, T.; Ozawa, S. Privacy-Preserving Decision Tree Classification Using Ring-LWE-Based Homomorphic Encryption. In Proceedings of the Computer Security Symposium 2019, Nagasaki, Japan, 21–24 October 2019; pp. 321–327. [Google Scholar]
Figure 1. Decision tree example.
Figure 2. Our computation model.
Figure 3. Flowchart of the proposed protocol, where [[·]] denotes the ciphertext of “·”.
Figure 4. Illustration of the leaf node pruning (LNP) algorithm.
Figure 4. Image for LNP.
Applsci 15 05560 g004
Figure 5. Comparison of computation times with and without LNP across all datasets. For each dataset, the bar chart shows the path cost and total computation times, highlighting the reduction achieved by applying LNP. Datasets with fewer classes (e.g., SP, EGG, BK) showed more significant improvements due to more efficient pruning.
Figure 6. Computation time for decision nodes.
Table 1. Dataset information and the size of the trained decision tree.

| Dataset | # Data | # Attributes N | # Classes C | # Nodes m |
|---------|--------|----------------|-------------|-----------|
| BC      | 569    | 30             | 2           | 8         |
| ST      | 43,500 | 9              | 7           | 24        |
| CS      | 653    | 15             | 2           | 26        |
| HD      | 720    | 13             | 5           | 35        |
| NS      | 12,960 | 8              | 5           | 49        |
| SP      | 4601   | 57             | 2           | 110       |
| EGG     | 14,980 | 14             | 2           | 724       |
| BK      | 45,211 | 16             | 2           | 1027      |

# denotes “the number of” data, attributes, classes, and decision nodes; it is used in the table header for brevity.
Table 2. Computation time for path-cost calculation using two methods: the matrix multiplication of Section 3.2 and the homomorphic inner-product calculation of Yasuda et al. [19] that we proposed in [24]. The computation time for matrix multiplication with LNP is given in parentheses; the LNP time was measured as an average over 10 trials for each dataset class.

| Dataset | Accuracy | Path Cost (ms), Matrix (+LNP) | Path Cost (ms), Inner Product | Total Time (ms), Matrix (+LNP) | Total Time (ms), Inner Product |
|---------|----------|-------------------------------|-------------------------------|---------------------------------|--------------------------------|
| BC      | 96.4%    | 0.25 (0.25)                   | 3.04                          | 44.19 (43.92)                   | 63.58                          |
| ST      | 99.8%    | 0.25 (0.25)                   | 6.74                          | 66.85 (66.83)                   | 117.51                         |
| CS      | 90.8%    | 0.25 (0.25)                   | 7.47                          | 77.57 (76.12)                   | 134.64                         |
| HD      | 61.1%    | 0.25 (0.25)                   | 9.73                          | 98.82 (98.11)                   | 174.34                         |
| NS      | 98.6%    | 0.50 (0.37)                   | 11.72                         | 130.87 (129.07)                 | 235.04                         |
| SP      | 90.4%    | 1.72 (0.86)                   | 13.13                         | 321.89 (302.72)                 | 490.04                         |
| EGG     | 69.8%    | 88.57 (46.20)                 | 179.31                        | 3425.44 (2516.35)               | 4442.00                        |
| BK      | 88.6%    | 247.63 (131.14)               | 252.87                        | 6485.30 (4394.58)               | 6455.17                        |
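The matrix-multiplication path-cost method benchmarked in Table 2 can be illustrated with a small plaintext sketch. In the actual protocol this product is evaluated homomorphically on ciphertexts; the toy tree, the path matrix `P`, and the expected-outcome matrix `E` below are illustrative assumptions, not the paper’s concrete construction.

```python
# Plaintext sketch of path-cost evaluation for decision tree inference.
# Toy tree (assumed for illustration):
#   node 0: x[0] < 5; node 1: x[1] < 3 (left child of node 0)
#   leaves: L0 (left,left), L1 (left,right), L2 (right)

def comparisons(x, thresholds):
    """Per-node comparison bits: 1 if the feature is below the threshold."""
    return [1 if x[f] < t else 0 for f, t in thresholds]

def path_costs(P, E, c):
    """Cost of leaf i = number of nodes on its path whose comparison
    result c[j] disagrees with the expected outcome E[i][j].
    P[i][j] = 1 iff decision node j lies on the path to leaf i.
    For 0/1 values, (E[i][j] - c[j])**2 acts as XOR."""
    m = len(c)
    return [sum(P[i][j] * (E[i][j] - c[j]) ** 2 for j in range(m))
            for i in range(len(P))]

thresholds = [(0, 5), (1, 3)]   # (feature index, threshold) per decision node
P = [[1, 1], [1, 1], [1, 0]]    # nodes appearing on each leaf's path
E = [[1, 1], [1, 0], [0, 0]]    # comparison outcomes expected on each path

x = [4, 7]                      # left at node 0 (4 < 5), right at node 1 (7 >= 3)
c = comparisons(x, thresholds)
costs = path_costs(P, E, c)
print(costs.index(0))           # prints 1: leaf L1 is the prediction (cost 0)
```

Because all entries are 0/1, each mismatch term needs only one subtraction and one squaring, both of which are available homomorphically, and the leaf whose cost decrypts to zero is the predicted leaf.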
Table 3. Computation time with XCMP [15] in two scenarios: three-party with matrix multiplication and LNP, and two-party with homomorphic addition of XCMP results for computing the path cost.

| Dataset | Path Cost (ms), Three-Party | Path Cost (ms), Two-Party | Total Time (ms), Three-Party | Total Time (ms), Two-Party |
|---------|-----------------------------|---------------------------|------------------------------|----------------------------|
| BC      | 2.17                        | 1577.51                   | 302.19                       | 1958.13                    |
| ST      | 2.17                        | 11,745.70                 | 647.51                       | 12,724.20                  |
| CS      | 2.18                        | 14,905.70                 | 736.95                       | 16,037.90                  |
| HD      | 2.18                        | 27,041.40                 | 961.19                       | 28,298.90                  |
| NS      | 2.18                        | 51,772.50                 | 1321.47                      | 54,268.10                  |
| SP      | 2.19                        | 255,659.00                | 2996.92                      | 263,008.00                 |
| EGG     | 74.01                       | 10,947,300.00             | 19,432.90                    | 11,013,300.00              |
| BK      | 166.74                      | 22,027,200.00             | 28,821.60                    | 22,130,500.00              |
Fukui, S.; Wang, L.; Ozawa, S. Efficient and Privacy-Preserving Decision Tree Inference via Homomorphic Matrix Multiplication and Leaf Node Pruning. Appl. Sci. 2025, 15, 5560. https://doi.org/10.3390/app15105560