City Research Online

Privacy vs Utility analysis when applying Differential Privacy on Machine Learning Classifiers

Selvarathnam, M., Ragel, R., Reyes-Aldasoro, C. C. ORCID: 0000-0002-9466-2018 & Rajarajan, M. ORCID: 0000-0001-5814-9922 (2023). Privacy vs Utility analysis when applying Differential Privacy on Machine Learning Classifiers. In: 2023 19th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). IEEE International Conference on Wireless and Mobile Computing, Networking And Communications (WiMob), 21-23 Jun 2023, Montreal, QC, Canada. doi: 10.1109/WiMob58348.2023.10187829


In this paper, we present how Differential Privacy (DP), a recent state-of-the-art privacy-preserving technology, interacts with four different Machine Learning (ML) classifiers. Preserving privacy while serving utility needs is a challenge for every ML implementation. To study the effects of different DP implementations on an ML method, we apply perturbation at different phases of the ML cycle: perturbing data at its origin (Differential Privacy Method 1 - DPM1), during the training process (DPM2), or perturbing the parameters of the generated ML model (DPM3), and we measure the effect of each form of privacy preservation on ML model utility. Further, we have tested different perturbation methods for DPM1, namely the Laplace, Gaussian, Analytic Gaussian, Snapping, and Staircase mechanisms, and analysed the results to determine which works best. We tested each case with varying privacy budgets. We used privacy attacks such as the Membership Inference Attack (MIA) and the Attribute Inference Attack (AIA) to evaluate DP's effectiveness in protecting data privacy. Our experimental results showed that perturbing at later stages of an ML method provides better utility. Among the DPM1 mechanisms, the improved Laplace and Gaussian variants achieve better utility while preserving privacy.
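As a rough illustration of the DPM1 (input-perturbation) setting described above, the following sketch adds calibrated Laplace noise to feature values before training. This is not the paper's implementation; the function name, toy data, and parameter values are hypothetical, and the noise scale follows the standard Laplace mechanism, sensitivity divided by the privacy budget epsilon.

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    """Perturb values with Laplace noise of scale sensitivity/epsilon
    (standard Laplace mechanism; hypothetical helper, not from the paper)."""
    rng = np.random.default_rng(0) if rng is None else rng
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=np.shape(values))

# DPM1-style: perturb a toy feature column at its origin, before training.
features = np.array([4.2, 1.7, 3.3, 2.8])
noisy = laplace_mechanism(features, sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon (tighter privacy budget) yields a larger noise scale, which is the privacy-vs-utility trade-off the abstract analyses.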

Publication Type: Conference or Workshop Item (Paper)
Additional Information: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Publisher Keywords: Differential Privacy; Machine Learning; Privacy vs Utility; Privacy-preserving technologies
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
T Technology > T Technology (General)
Departments: School of Science & Technology > Computer Science
Text - Accepted Version



