Journal of Machine Learning Research 12 (2011) 1069-1109 Submitted 6/10; Revised 2/11; Published 3/11
Differentially Private Empirical Risk Minimization
Kamalika Chaudhuri KCHAUDHURI@UCSD.EDU
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA 92093, USA
Claire Monteleoni CMONTEL@CCLS.COLUMBIA.EDU
Center for Computational Learning Systems
Columbia University
New York, NY 10115, USA
Anand D. Sarwate ASARWATE@UCSD.EDU
Information Theory and Applications Center
University of California, San Diego
La Jolla, CA 92093-0447, USA
Editor: Nicolas Vayatis
Abstract
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting
in which personal data, such as medical or financial records, are analyzed. We provide general
techniques to produce privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy
definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et
al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails perturbing the objective
function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and
differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy,
and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-
preserving technique for tuning the parameters in general machine learning algorithms, thereby
providing end-to-end privacy guarantees for the training process. We apply these results to produce
privacy-preserving analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real demographic and benchmark
data sets. Our results show that both theoretically and empirically, objective perturbation is superior
to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between
privacy and learning performance.
Keywords: privacy, classification, optimization, empirical risk minimization, support vector machines, logistic regression
1. Introduction
Privacy has become a growing concern, due to the massive increase in personal information stored
in electronic databases, such as medical records, financial records, web search histories, and social
network data. Machine learning can be employed to discover novel population-wide patterns;
however, the results of such algorithms may reveal certain individuals’ sensitive information, thereby
© 2011 Kamalika Chaudhuri, Claire Monteleoni and Anand D. Sarwate.