Journal of Machine Learning Research 12 (2011) 1069-1109 Submitted 6/10; Revised 2/11; Published 3/11
Differentially Private Empirical Risk Minimization
Kamalika Chaudhuri KCHAUDHURI@UCSD.EDU
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA 92093, USA
Claire Monteleoni CMONTEL@CCLS.COLUMBIA.EDU
Center for Computational Learning Systems
Columbia University
New York, NY 10115, USA
Anand D. Sarwate ASARWATE@UCSD.EDU
Information Theory and Applications Center
University of California, San Diego
La Jolla, CA 92093-0447, USA
Editor: Nicolas Vayatis
Abstract
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting
in which personal data, such as medical or financial records, are analyzed. We provide general
techniques to produce privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy
definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et
al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails perturbing the objective
function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and
differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy,
and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-
preserving technique for tuning the parameters in general machine learning algorithms, thereby
providing end-to-end privacy guarantees for the training process. We apply these results to produce
privacy-preserving analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real demographic and benchmark
data sets. Our results show that both theoretically and empirically, objective perturbation is superior
to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between
privacy and learning performance.
Keywords: privacy, classification, optimization, empirical risk minimization, support vector ma-
chines, logistic regression
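The two approaches named in the abstract can be contrasted in a short sketch. The code below is an illustrative, non-authoritative implementation of output perturbation for L2-regularized logistic regression: train the non-private ERM minimizer, then add random noise to the resulting weight vector. The helper `fit_logreg`, the learning-rate and step-count choices, and the noise scale β = nλε/2 (which presumes feature vectors with ‖x‖ ≤ 1 and a 1-Lipschitz loss, so the minimizer's L2 sensitivity is 2/(nλ)) are assumptions for this sketch, not a verbatim transcription of the paper's algorithm.

```python
import numpy as np

def fit_logreg(X, y, lam, steps=500, lr=0.5):
    """Non-private ERM: minimize (1/n) sum_i log(1 + exp(-y_i x_i.w)) + (lam/2)||w||^2
    by plain gradient descent (illustrative choice of optimizer)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        sig = 1.0 / (1.0 + np.exp(margins))               # sigma(-margin)
        grad = -(X * (y * sig)[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def output_perturbation(X, y, lam, eps, rng=None):
    """Output perturbation sketch: train non-privately, then perturb the output.

    Assumed preconditions (see lead-in): ||x_i|| <= 1 and 1-Lipschitz loss,
    so adding noise eta with density proportional to exp(-beta * ||eta||),
    beta = n * lam * eps / 2, is the intended privacy mechanism.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    w = fit_logreg(X, y, lam)
    beta = n * lam * eps / 2.0
    # Sample eta: uniform random direction scaled by a Gamma(d, 1/beta) norm,
    # which realizes the density exp(-beta * ||eta||) in d dimensions.
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    eta = rng.gamma(shape=d, scale=1.0 / beta) * direction
    return w + eta
```

Objective perturbation, the paper's new method, differs in where the noise enters: a similarly distributed random vector b is drawn first and the linear term (1/n) b·w is added to the objective before minimizing, which the paper argues manages the privacy/accuracy tradeoff better than perturbing the trained classifier.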
1. Introduction
Privacy has become a growing concern, due to the massive increase in personal information stored
in electronic databases, such as medical records, financial records, web search histories, and social
network data. Machine learning can be employed to discover novel population-wide patterns; however, the results of such algorithms may reveal certain individuals' sensitive information, thereby
©2011 Kamalika Chaudhuri, Claire Monteleoni and Anand D. Sarwate.