A PROTECTION METHOD FOR THE GLOBAL MODEL OF FEDERATED LEARNING SYSTEMS BASED ON A TRUST MODEL
V. M. Krundyshev, V. K. Cheskidov, M. O. Kalinin
Peter the Great St. Petersburg Polytechnic University
Annotation: The paper addresses the problem of ensuring the security of the global model in federated learning systems. The proposed protection method, based on verifying client updates with a trusted group of nodes, ensures that only correct updates are taken into account during aggregation of the global model. The experimental study demonstrates that the developed method quickly identifies and isolates adversarial clients that perform label substitution and noise injection attacks.
Keywords: artificial intelligence security, model security, noise injection, trust model, label substitution, federated learning
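The annotation describes filtering client updates through a trusted group of nodes before aggregation. The following is a minimal sketch of that idea, not the authors' exact protocol: the validation function, acceptance threshold, and FedAvg-style averaging are illustrative assumptions.

```python
import numpy as np

def verify_update(update, trusted_validation_fn, threshold=0.5):
    # A trusted node scores the client's update (e.g., accuracy on a held-out
    # trusted dataset); the update is accepted only if the score meets the
    # threshold. Both the scoring function and threshold are assumptions.
    return trusted_validation_fn(update) >= threshold

def aggregate_global_model(client_updates, trusted_validation_fn):
    # Keep only updates approved by the trusted group of nodes.
    accepted = [u for u in client_updates
                if verify_update(u, trusted_validation_fn)]
    if not accepted:
        return None  # no trustworthy updates in this round
    # FedAvg-style aggregation: element-wise mean of the accepted weight vectors.
    return np.mean(np.stack(accepted), axis=0)
```

In this sketch, adversarial clients applying label substitution or noise injection would produce updates that score poorly under the trusted nodes' validation and are therefore excluded from the global model.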