# Evaluating the Privacy and Utility of Time-Series Data Perturbation Algorithms

## Abstract


## 1. Introduction

- A systematic procedure for evaluating the utility and privacy of perturbation algorithms. The approach is sensor-type-independent, algorithm-independent, and data-independent.
- A methodology for comparing data perturbation methods.
- We demonstrate applicability by assessing the impact of data integrity attacks on perturbed data.
- We validate the approach on real driving data, building the dataset according to the stated requirements.

## 2. Background and Related Work

#### 2.1. Time-Series Data Privacy Techniques

#### 2.2. Privacy and Utility Metrics for Data Perturbation

#### 2.3. Cyber Attack Impact Assessment (CAIA) Methodology

## 3. Proposed Approach

#### 3.1. Perturbation System Architecture and Design Considerations

- the type of data leaving the system and the potentially sensitive information they carry;
- the amount of information to be hidden considering possible sensitive information or other external factors;
- utility restrictions (how much information about the data should still be available after perturbation);
- the processing power of the equipment.

#### 3.2. Formal Description

**Definition 1.** The collected time-series data X exhibits **regular behavior** if its standard deviation ${\sigma}_{r}$ is in the interval of regular operation, ${\sigma}_{r}\in [{\sigma}_{{r}_{min}},{\sigma}_{{r}_{max}}]$.

**Definition 2.** The collected time-series data X represents **sensitive behavior** (**user-specific** information) of the system user if the standard deviation of X, ${\sigma}_{s}$, is outside the regular operation interval: ${\sigma}_{s}\in [{\sigma}_{min},{\sigma}_{{r}_{min}})$ or ${\sigma}_{s}\in ({\sigma}_{{r}_{max}},{\sigma}_{max}]$.
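Definitions 1 and 2 reduce behavior classification to a standard-deviation check against the regular-operation interval. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def classify_behavior(x, sigma_r_min, sigma_r_max):
    """Classify a time series as regular or sensitive behavior by its
    standard deviation (Definitions 1 and 2): inside the regular-operation
    interval means regular, outside means sensitive."""
    sigma = float(np.std(x))
    if sigma_r_min <= sigma <= sigma_r_max:
        return "regular"
    return "sensitive"
```

For example, with the vehicle-speed interval of Table 2 ($[0.0463, 0.0539]$), a series with $\sigma = 0.0376$ (behavior ${s}_{1}$) is classified as sensitive.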

**Definition 3.** An **intervention** is an intentional modification of the collected time-series data X such that the standard deviation of X during the attack, ${\sigma}_{a}$, is greater than the standard deviation of all collected sensitive behavior data (${\sigma}_{a}>{\sigma}_{s}$) or smaller than the standard deviation of all collected sensitive behavior data (${\sigma}_{a}<{\sigma}_{s}$).

The **impact** that a behavior b has on the observed variable of the perturbation system is computed as the cross-covariance between the perturbed landmark regular behavior ${Y}_{0}=\mathcal{M}\left({X}_{0}\right)$ and the collected behavior data b.

The **relative impact** $\mathcal{C}$ of a behavior b on the observed variable normalizes this impact; without perturbation (${\mathcal{M}}^{0}$), the relative impact equals 1.

The **mean relative impact** $\overline{\mathcal{C}}$ is used to quantify the impact of interventions under uncertainty.
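These quantities can be sketched as follows, assuming a lag-zero cross-covariance and normalization by the impact against the unperturbed landmark (the paper's exact equations are not reproduced here):

```python
import numpy as np

def impact(u, v):
    """Lag-zero cross-covariance between two series (one plausible
    reading of the impact coefficient; the exact lag handling used in
    the paper is not reproduced here)."""
    n = min(len(u), len(v))
    a = u[:n] - u[:n].mean()
    b = v[:n] - v[:n].mean()
    return float(np.dot(a, b) / n)

def relative_impact(y0, x0, xb):
    """Relative impact of behavior xb: impact computed against the
    perturbed landmark y0 = M(x0), normalized by the impact against the
    unperturbed landmark x0. The normalization is an assumption; it is
    consistent with the identity method M0 scoring 1.0 in Tables 4 and 5."""
    return impact(y0, xb) / impact(x0, xb)
```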

**Definition 4.** A perturbation method $\mathcal{M}$ provides **$\alpha $-behavior-privacy** for the observed variable if the corresponding condition on the relative impact, parameterized by ${\alpha}_{p}$, holds.

**Definition 5.** A perturbation method $\mathcal{M}$ provides **$\alpha $-behavior-utility** for the observed variable if the corresponding condition, parameterized by ${\alpha}_{u}$, holds.

#### 3.3. Comparing Perturbation Methods

**Definition 6.** Given two perturbation methods ${\mathcal{M}}^{1}$ and ${\mathcal{M}}^{2}$:

- ${\mathcal{M}}^{1}>{\mathcal{M}}^{2}$: if ${\mathcal{M}}^{1}$ provides higher utility than ${\mathcal{M}}^{2}$;
- ${\mathcal{M}}^{1}\gg {\mathcal{M}}^{2}$: if ${\mathcal{M}}^{1}$ provides higher privacy than ${\mathcal{M}}^{2}$;
- ${\mathcal{M}}^{1}\ggg {\mathcal{M}}^{2}$: if ${\mathcal{M}}^{1}$ provides both higher utility and higher privacy than ${\mathcal{M}}^{2}$.

**Proposition 1.**

**Proof.**

**Proposition 2.**

**Proof.**

**Proposition 3.**

**Proof.**

Algorithm 1: Comparison of Perturbation Methods
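A minimal sketch of such a comparison, assuming privacy is proxied by the (lower-is-better) mean relative impact on sensitive behaviors and utility by the (lower-is-better) information loss (MAE); the decision details of the paper's Algorithm 1 may differ:

```python
def compare_methods(impacts1, mae1, impacts2, mae2):
    """Compare two perturbation methods M1 and M2 (Definition 6).
    impacts1/impacts2: mean relative impacts on the sensitive behaviors
    (smaller worst case = more hiding = higher privacy, an assumption);
    mae1/mae2: information loss (smaller = higher utility)."""
    higher_privacy = max(impacts1) < max(impacts2)
    higher_utility = mae1 < mae2
    if higher_privacy and higher_utility:
        return "M1 >>> M2"   # higher privacy and higher utility
    if higher_privacy:
        return "M1 >> M2"    # higher privacy only
    if higher_utility:
        return "M1 > M2"     # higher utility only
    return "M1 does not dominate M2"
```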

#### 3.4. Evaluation of the Utility of a Perturbation Method in Case of Data Interventions

**Proposition 4.**

**Proof.**

Algorithm 2: Intervention Impact on Perturbed Data
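One plausible sketch of the decision at the core of Algorithm 2, assuming an intervention remains distinguishable after perturbation when its mean relative impact falls outside the range spanned by the sensitive behaviors' mean relative impacts (cf. the $min$/$max$ rows of Table 5); the decision rule itself is an assumption:

```python
def intervention_detectable(c_attack, c_sensitive):
    """Return True if the attack's mean relative impact c_attack lies
    outside [min, max] of the sensitive behaviors' mean relative
    impacts c_sensitive, i.e., the intervention is still separable
    from ordinary user behavior after perturbation (assumed rule)."""
    lo, hi = min(c_sensitive), max(c_sensitive)
    return c_attack < lo or c_attack > hi
```

With the ${\mathcal{M}}^{1}$ column of Table 5, the attack impacts (0.132–0.157) fall below the sensitive minimum (0.184), so they would be flagged under this rule.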

## 4. Experimental Results

#### 4.1. Data Collection and Intervention Generation

**Step 1**: Collect several normal behavior time-series data, compute the standard deviation $\sigma $ for each one, and identify $[{\sigma}_{{r}_{min}},{\sigma}_{{r}_{max}}]$, the interval of minimum and maximum standard deviation possible for normal behavior.

**Step 2**: Choose the landmark normal behavior (${X}_{0}$), the data further used for computing impact coefficients and for attack generation. For instance, choose the normal behavior whose standard deviation is closest to the middle of the $[{\sigma}_{{r}_{min}},{\sigma}_{{r}_{max}}]$ interval.

**Step 3**: Identify possible sensitive behaviors and collect the corresponding data. The collected data qualify as sensitive behavior if their standard deviation is outside the interval $[{\sigma}_{{r}_{min}},{\sigma}_{{r}_{max}}]$, according to Definition 2.
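Steps 1 and 2 can be sketched as follows (illustrative names, not the paper's code):

```python
import numpy as np

def regular_interval_and_landmark(regular_runs):
    """Steps 1-2 of the data-collection procedure: compute the interval
    [sigma_r_min, sigma_r_max] over several regular-behavior recordings,
    then choose as landmark X0 the recording whose standard deviation is
    closest to the middle of that interval."""
    sigmas = np.array([np.std(run) for run in regular_runs])
    lo, hi = sigmas.min(), sigmas.max()
    mid = (lo + hi) / 2.0
    landmark = regular_runs[int(np.argmin(np.abs(sigmas - mid)))]
    return (float(lo), float(hi)), landmark
```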

- Pulse attack: the altered value ${X}_{j}^{*}\left(t\right)$ is obtained by dividing the value of attribute j at time t, ${X}_{j}\left(t\right)$, by an attack parameter ${a}_{p}$: ${X}_{j}^{*}\left(t\right)={X}_{j}\left(t\right)/{a}_{p}$, for t in the attack interval $[{T}_{start},{T}_{stop}]$;
- Scaling attack: the value ${X}_{j}\left(t\right)$ is scaled by the attack parameter ${a}_{p}$: ${X}_{j}^{*}\left(t\right)={a}_{p}\cdot {X}_{j}\left(t\right)$, for $t\in [{T}_{start},{T}_{stop}]$;
- Ramp attack: values from a ramp function are added: ${X}_{j}^{*}\left(t\right)={X}_{j}\left(t\right)+ramp\left(t\right)$, for $t\in [{T}_{start},{T}_{stop}]$, where $ramp\left(t\right)={a}_{p}\cdot t$;
- Random attack: a random value drawn from the uniform distribution on $(-{a}_{p},{a}_{p})$ is added: ${X}_{j}^{*}\left(t\right)={X}_{j}\left(t\right)+random(-{a}_{p},{a}_{p})$, for $t\in [{T}_{start},{T}_{stop}]$;
- Step attack: the values are set to the attack parameter ${a}_{p}$: ${X}_{j}^{*}\left(t\right)={a}_{p}$, for $t\in [{T}_{start},{T}_{stop}]$.
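The five interventions above can be generated, for example, as follows (illustrative sketch; `apply_attack` is not from the paper):

```python
import numpy as np

def apply_attack(x, kind, a_p, t_start, t_stop):
    """Apply one of the five data-integrity interventions of Section 4.1
    to a copy of time series x, over the inclusive window
    [t_start, t_stop] of sample indices."""
    y = x.astype(float).copy()
    t = np.arange(t_start, t_stop + 1)
    if kind == "pulse":      # divide by the attack parameter
        y[t] = y[t] / a_p
    elif kind == "scaling":  # multiply by the attack parameter
        y[t] = a_p * y[t]
    elif kind == "ramp":     # add the ramp a_p * t
        y[t] = y[t] + a_p * t
    elif kind == "random":   # add uniform noise from (-a_p, a_p)
        y[t] = y[t] + np.random.uniform(-a_p, a_p, size=t.size)
    elif kind == "step":     # overwrite with the attack parameter
        y[t] = a_p
    else:
        raise ValueError(f"unknown attack kind: {kind}")
    return y
```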

#### 4.2. Experiments

#### 4.2.1. Compare Perturbation Methods

#### 4.2.2. Evaluate the Utility of a Perturbation Module for Detecting Data Interventions/Attacks

## 5. Discussion

- Compared to the other mechanisms, the proposed approach measures both privacy and utility;
- Various distortion and perturbation methods can be compared, no matter how different they are;
- An evaluation of the impact of various data integrity attacks on perturbed data is possible.

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 3.** Regular and sensitive user behavior normalized data (vehicle speed): (**a**) regular behavior; (**b**) sensitive behavior ${s}_{1}$ (brake pressed every 30 s); (**c**) sensitive behavior ${s}_{2}$ (stop and go every 60 s); (**d**) sensitive behavior ${s}_{3}$ (sharp acceleration every 60 s); (**e**) sensitive behavior ${s}_{4}$ (brake and acceleration alternately every 60 s); (**f**) sensitive behavior ${s}_{5}$ (stop and go and acceleration alternately every 60 s).

**Figure 4.** Generated interventions from vehicle speed normalized data: (**a**) landmark regular behavior; (**b**) intervention/attack ${a}_{1}$ (pulse attack); (**c**) intervention/attack ${a}_{2}$ (scaling attack); (**d**) intervention/attack ${a}_{3}$ (ramp attack); (**e**) intervention/attack ${a}_{4}$ (random attack); (**f**) intervention/attack ${a}_{5}$ (step attack).

**Figure 5.** Perturbation of time-series (vehicle speed) normalized data using various perturbation methods: (**a**) regular/normal behavior; (**b**) sensitive behavior (s3); (**c**) sensitive behavior (s4); (**d**) regular behavior perturbed with M1; (**e**) sensitive behavior (s3) perturbed with M1; (**f**) sensitive behavior (s4) perturbed with M1; (**g**) regular behavior perturbed with M3; (**h**) sensitive behavior (s3) perturbed with M3; (**i**) sensitive behavior (s4) perturbed with M3; (**j**) regular behavior perturbed with M5; (**k**) sensitive behavior (s3) perturbed with M5; (**l**) sensitive behavior (s4) perturbed with M5.

**Figure 6.** Sensitive behavior data (vehicle speed): (**a**) minimum and maximum impact coefficients for all tested perturbation methods; (**b**) maximum and mean MAE (information loss) for all tested perturbation methods; (**c**) maximum probability of the real query result for all tested perturbation methods.

**Figure 7.** Minimum and maximum impact coefficients for all tested perturbation methods for (**a**) CO${}_{2}$ flow (g/s) values; (**b**) instant fuel economy (l/100 km) values; (**c**) magnetometer X (µT) values.

**Figure 8.** (**a**) Maximum MAE (information loss) for all tested perturbation methods; (**b**) maximum probability of the real query result for all tested perturbation methods.

Symbol | Description |
---|---|
R | Set of collected regular (typical) user behavior data |
S | Set of collected sensitive user behavior data |
B | Set of collected user behavior data (regular and sensitive), $B=R\cup S$ and $R\cap S=\varnothing $ |
A | Set of intervention data (integrity attacks), $A\cap B=\varnothing $ |
$\sigma $ | Standard deviation |
${\sigma}_{{r}_{min}}$ | Minimum standard deviation of regular user behavior data, $\forall r\in R$ |
${\sigma}_{{r}_{max}}$ | Maximum standard deviation of regular user behavior data, $\forall r\in R$ |
${\sigma}_{r}$ | Standard deviation of regular user behavior data, $r\in R$ |
${\sigma}_{s}$ | Standard deviation of sensitive user behavior data, $s\in S$ |
${\sigma}_{a}$ | Standard deviation of intervention data (integrity attack), $a\in A$ |
${X}_{r}$ | Regular user behavior data, $r\in R$ |
${X}_{0}$ | Landmark regular user behavior data |
${X}_{s}$ | Sensitive user behavior data, $s\in S$ |
${X}_{b}$ | Regular or sensitive user behavior data, $b\in B$ |
${X}_{a}$ | Intervention (attack) data, $a\in A$ |
$\mathcal{M}$ | Perturbation method |
${Y}_{r}$, ${Y}_{0}$, ${Y}_{s}$, ${Y}_{b}$, ${Y}_{a}$ | Perturbed data, ${Y}_{r}=\mathcal{M}\left({X}_{r}\right)$, ${Y}_{0}=\mathcal{M}\left({X}_{0}\right)$, ${Y}_{s}=\mathcal{M}\left({X}_{s}\right)$, ${Y}_{b}=\mathcal{M}\left({X}_{b}\right)$, ${Y}_{a}=\mathcal{M}\left({X}_{a}\right)$ |
C | Cross-covariance |
$\mathcal{C}$ | Relative impact of behavior data on the observed variable (attribute) |
$\overline{\mathcal{C}}$ | Mean relative impact of behavior data on the observed variable (attribute) |
${\alpha}_{p}$ | Behavior-privacy parameter |
${\alpha}_{u}$ | Behavior-utility parameter |

**Table 2.** Standard deviation of the collected vehicle speed data and of the generated interventions/attacks.

Time-Series Data | Description | Standard Deviation ($\mathit{\sigma}$) |
---|---|---|
Regular behavior (${r}_{1}$) | Vehicle usage under regular driving | 0.0539 |
Regular behavior (${r}_{2}$) | Vehicle usage under regular driving | 0.0474 |
Regular behavior (${r}_{3}$) | Vehicle usage under regular driving | 0.0463 |
Regular behavior (${r}_{4}$) | Vehicle usage under regular driving | 0.0495 |
Regular behavior (${r}_{5}$)—landmark (${X}_{0}$) | Vehicle usage under regular driving | 0.0500 |
Sensitive behavior (${s}_{1}$) | Brake pressed with a random intensity every 30 s | 0.0376 |
Sensitive behavior (${s}_{2}$) | Stop and go every 60 s | 0.0598 |
Sensitive behavior (${s}_{3}$) | Accelerate with a random intensity every 60 s | 0.0557 |
Sensitive behavior (${s}_{4}$) | Brake and accelerate alternately every 60 s | 0.0678 |
Sensitive behavior (${s}_{5}$) | Stop and go and accelerate alternately every 60 s | 0.0569 |
Intervention/attack (${a}_{1}$) | Pulse attack (attack window size = 7, ${a}_{p}=20$) | 0.0701 |
Intervention/attack (${a}_{2}$) | Scaling attack (attack window size = 7, ${a}_{p}=2$) | 0.0816 |
Intervention/attack (${a}_{3}$) | Ramp attack (attack window size = 35, ${a}_{p}=0.8$) | 0.0860 |
Intervention/attack (${a}_{4}$) | Random attack (attack window size = 7, ${a}_{p}=50$) | 0.0762 |
Intervention/attack (${a}_{5}$) | Step attack (attack window size = 10, ${a}_{p}=70$) | 0.0906 |

Distortion/Perturbation Method | Notation | Number of Fourier Coefficients | Noise Size/Privacy Budget |
---|---|---|---|
No distortion/perturbation | ${\mathcal{M}}^{0}$ | - | - |
Filtered FFT | ${\mathcal{M}}^{1}$ | $k=10$ | - |
Filtered FFT | ${\mathcal{M}}^{2}$ | $k=30$ | - |
CPA Algorithm | ${\mathcal{M}}^{3}$ | $k=50$ | $discord=1.5$ |
CPA Algorithm | ${\mathcal{M}}^{4}$ | $k=65$ | $discord=2$ |
FPA Algorithm | ${\mathcal{M}}^{5}$ | $k=50$ | $\epsilon =0.9$ |
FPA Algorithm | ${\mathcal{M}}^{6}$ | $k=65$ | $\epsilon =0.5$ |
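For context, the Filtered FFT distortion and a simplified FPA-style perturbation can be sketched as below. The FPA noise calibration shown (Laplace noise with scale $k/\epsilon$ times an assumed sensitivity) is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def filtered_fft(x, k):
    """Filtered FFT distortion (methods M1, M2): keep only the first k
    Fourier coefficients and reconstruct the series, discarding
    high-frequency detail."""
    coeffs = np.fft.rfft(x)
    coeffs[k:] = 0
    return np.fft.irfft(coeffs, n=len(x))

def fpa(x, k, eps, sensitivity=1.0, rng=None):
    """Sketch of an FPA-style perturbation (methods M5, M6): Laplace
    noise, driven by the privacy budget eps, is added to the k retained
    Fourier coefficients before reconstruction. The noise scale
    sensitivity * k / eps is an illustrative assumption."""
    if rng is None:
        rng = np.random.default_rng()
    coeffs = np.fft.rfft(x)
    coeffs[:k] = coeffs[:k] + rng.laplace(scale=sensitivity * k / eps, size=k)
    coeffs[k:] = 0
    return np.fft.irfft(coeffs, n=len(x))
```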

**Table 4.**Mean relative impact coefficients ${\overline{\mathcal{C}}}_{b}$ for tested perturbation methods applied on sensitive behavior data.

Sensitive Behavior | ${\mathcal{M}}^{0}$ | ${\mathcal{M}}^{1}$ | ${\mathcal{M}}^{2}$ | ${\mathcal{M}}^{3}$ | ${\mathcal{M}}^{4}$ | ${\mathcal{M}}^{5}$ | ${\mathcal{M}}^{6}$ |
---|---|---|---|---|---|---|---|
Brake/30 s (${s}_{1}$) | 1.0 | 0.205 | 0.217 | 0.194 | 0.255 | 0.236 | 0.228 |
Stop/60 s (${s}_{2}$) | 1.0 | 0.266 | 0.205 | 0.353 | 0.202 | 0.235 | 0.259 |
Acceleration/60 s (${s}_{3}$) | 1.0 | 0.251 | 0.238 | 0.208 | 0.232 | 0.239 | 0.243 |
Brake and acceleration (alternately)/60 s (${s}_{4}$) | 1.0 | 0.184 | 0.218 | 0.364 | 0.105 | 0.228 | 0.240 |
Stop and go and acceleration (alternately)/60 s (${s}_{5}$) | 1.0 | 0.257 | 0.256 | 0.334 | 0.216 | 0.245 | 0.249 |

**Table 5.**Mean relative impact coefficients ${\overline{\mathcal{C}}}_{a}$ for tested perturbation methods applied on intervention data (vehicle speed observed variable).

Intervention/Attack | ${\mathcal{M}}^{0}$ | ${\mathcal{M}}^{1}$ | ${\mathcal{M}}^{2}$ | ${\mathcal{M}}^{3}$ | ${\mathcal{M}}^{4}$ | ${\mathcal{M}}^{5}$ | ${\mathcal{M}}^{6}$ |
---|---|---|---|---|---|---|---|
Intervention/attack (${a}_{1}$) | 1.0 | 0.157 | 0.222 | 0.249 | 0.254 | 0.244 | 0.240 |
Intervention/attack (${a}_{2}$) | 1.0 | 0.140 | 0.214 | 0.233 | 0.253 | 0.235 | 0.243 |
Intervention/attack (${a}_{3}$) | 1.0 | 0.134 | 0.209 | 0.231 | 0.254 | 0.231 | 0.237 |
Intervention/attack (${a}_{4}$) | 1.0 | 0.138 | 0.213 | 0.244 | 0.263 | 0.237 | 0.242 |
Intervention/attack (${a}_{5}$) | 1.0 | 0.132 | 0.208 | 0.229 | 0.261 | 0.230 | 0.237 |
$min\left({\overline{\mathcal{C}}}_{s}\right)$ | 1.0 | 0.184 | 0.205 | 0.194 | 0.105 | 0.228 | 0.228 |
$max\left({\overline{\mathcal{C}}}_{s}\right)$ | 1.0 | 0.266 | 0.256 | 0.364 | 0.255 | 0.245 | 0.259 |

**Table 6.**Mean absolute error (MAE) for tested perturbation methods ($\times {10}^{-2}$) applied on sensitive behavior data.

Sensitive Behavior | ${\mathcal{M}}^{0}$ | ${\mathcal{M}}^{1}$ | ${\mathcal{M}}^{2}$ | ${\mathcal{M}}^{3}$ | ${\mathcal{M}}^{4}$ | ${\mathcal{M}}^{5}$ | ${\mathcal{M}}^{6}$ |
---|---|---|---|---|---|---|---|
Brake/30 s (${s}_{1}$) | 0.0 | 1.977 | 1.601 | 1.518 | 1.775 | 1.486 | 1.486 |
Stop/60 s (${s}_{2}$) | 0.0 | 3.977 | 2.832 | 2.668 | 2.650 | 2.568 | 2.577 |
Acceleration/60 s (${s}_{3}$) | 0.0 | 2.744 | 2.359 | 2.266 | 2.519 | 2.162 | 2.176 |
Brake and acceleration (alternately)/60 s (${s}_{4}$) | 0.0 | 3.541 | 3.027 | 3.072 | 3.028 | 2.849 | 2.796 |
Stop and go and acceleration (alternately)/60 s (${s}_{5}$) | 0.0 | 3.218 | 2.454 | 2.197 | 2.276 | 2.273 | 2.248 |

Sensitive Behavior | ${\mathcal{M}}^{3}$ | ${\mathcal{M}}^{4}$ | ${\mathcal{M}}^{5}$ | ${\mathcal{M}}^{6}$ |
---|---|---|---|---|
Brake/30 s (${s}_{1}$) | $4.67\times {10}^{-11}$ | $1.18\times {10}^{-8}$ | $1.31\times {10}^{-112}$ | $4.45\times {10}^{-64}$ |
Stop/60 s (${s}_{2}$) | $2.05\times {10}^{-6}$ | $3.83\times {10}^{-5}$ | $3.03\times {10}^{-39}$ | $4.91\times {10}^{-18}$ |
Acceleration/60 s (${s}_{3}$) | $2.98\times {10}^{-10}$ | $8.15\times {10}^{-9}$ | $1.53\times {10}^{-122}$ | $5.10\times {10}^{-57}$ |
Brake and acceleration (alternately)/60 s (${s}_{4}$) | $2.06\times {10}^{-9}$ | $1.64\times {10}^{-8}$ | $9.18\times {10}^{-78}$ | $9.19\times {10}^{-37}$ |
Stop and go and acceleration/60 s (${s}_{5}$) | $2.83\times {10}^{-7}$ | $8.87\times {10}^{-7}$ | $4.43\times {10}^{-38}$ | $4.30\times {10}^{-29}$ |


© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Roman, A.-S.
Evaluating the Privacy and Utility of Time-Series Data Perturbation Algorithms. *Mathematics* **2023**, *11*, 1260.
https://doi.org/10.3390/math11051260
