Quarterly journal published by SPbPU
and edited by Prof. Dmitry Zegzhda
Peter the Great St. Petersburg Polytechnic University
Institute of Computer Sciences and Technologies
Information Security of Computer Systems
Information Security Problems. Computer Systems
Published since 1999.
ISSN 2071-8217
PROTECTION OF MACHINE LEARNING MODELS FROM TRAINING DATA MEMBERSHIP INFERENCE
A. A. Muryleva, M. O. Kalinin, D. S. Lavrova
Peter the Great St. Petersburg Polytechnic University
Abstract: The paper addresses the problem of protecting machine learning models from the data confidentiality threat posed by membership inference against their training datasets. A method of protective noising of the training dataset is proposed. It is shown experimentally that Gaussian noising of the training dataset with a scale of 0.2 is the simplest and most effective approach to protecting machine learning models from training data extraction. Compared to alternative techniques, the proposed method is easy to implement, universal across different types of target models, and reduces the effectiveness of the attack by up to 26 percentage points.
Keywords: noising, machine learning, training set, membership inference, Gaussian noise
Pages 142–152
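
As an illustration of the protective noising described in the abstract, the following is a minimal sketch, assuming NumPy-style feature arrays; the function name, parameters, and toy data are hypothetical and not taken from the paper. It only shows the general idea of adding zero-mean Gaussian noise with a scale of 0.2 to training features before fitting a target model.

import numpy as np

def noise_training_set(X, scale=0.2, seed=None):
    # Hypothetical illustration of protective noising: perturb each feature
    # value with noise drawn from N(0, scale^2), blurring the exact training
    # points that a membership-inference attacker tries to recover.
    rng = np.random.default_rng(seed)
    return X + rng.normal(loc=0.0, scale=scale, size=X.shape)

# Usage example on a toy training matrix (values are illustrative only).
X_train = np.array([[0.1, 0.5], [0.9, 0.3]])
X_train_noised = noise_training_set(X_train, scale=0.2, seed=0)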