Information Security Problems. Computer Systems
Quarterly journal published at SPbPU, edited by Prof. Dmitry Zegzhda
Peter the Great St. Petersburg Polytechnic University
Institute of Computer Science and Technology
Information Security of Computer Systems
Published since 1999.
ISSN 2071-8217
PROTECTION AGAINST THE THREAT OF MACHINE LEARNING MODEL EXTRACTION
M. D. Soshnev, M. O. Kalinin
Peter the Great St. Petersburg Polytechnic University
Annotation: The threat of machine learning model extraction is considered. Most modern approaches to preventing model extraction rely on a protective noising mechanism, whose main disadvantage is a decrease in the accuracy of the outputs generated by the protected model. The paper states the requirements for methods of protecting machine learning models against extraction and presents a new method that supplements noising with a distillation stage. It has been experimentally shown that the developed method ensures the resistance of machine learning models to extraction while maintaining the quality of their results, by transforming the protected models into simplified but functionally equivalent models.
Keywords: machine learning security, model distillation, noising, soft label, degree of security, accuracy of results, model extraction threat.
Pages 95-107
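To make the two ingredients named in the annotation concrete, the sketch below illustrates (in PyTorch) how a noising defense and a distillation stage can be combined: a protected "teacher" model is distilled into a simpler but equivalent "student", and the soft labels served to clients are perturbed with noise. This is a minimal illustration, not the authors' implementation; the model names, temperature T, and noise scale are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, x, optimizer, T=4.0):
    """One knowledge-distillation step: the student is trained to match
    the teacher's temperature-softened output distribution (soft labels)."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    log_probs = F.log_softmax(student(x) / T, dim=1)
    # KL divergence between student and teacher distributions,
    # scaled by T^2 as is conventional in distillation.
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def noised_predict(model, x, noise_scale=0.05):
    """Serve predictions with additive noise on the soft labels, so an
    extraction adversary observes perturbed probabilities; the result is
    re-normalized to remain a valid probability distribution."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
        noisy = (probs + noise_scale * torch.rand_like(probs)).clamp(min=1e-8)
        return noisy / noisy.sum(dim=1, keepdim=True)
```

In this reading of the method, distillation absorbs the accuracy cost that noising alone would impose: the deployed student is a simplified equivalent of the protected model, and only its noised outputs are exposed to queries.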