Privacy-Preserving Federated Learning
Submission deadline: 2023-12-30

Section Collection Information

Dear Colleagues,

With the popularity of social media and smartphones, the amount of data held by institutions such as governments, hospitals, banks, and e-commerce platforms is growing exponentially. Data holders can provide these data to cloud service providers, which train models on them and offer value-added prediction and recommendation services to customers. How to ensure data privacy during model training has therefore attracted increasing attention. In centralized learning, users must upload their data to the cloud server for training, so personal data is completely exposed to the server. In federated learning, users do not upload personal data; instead, they train locally and send only the resulting model parameters to the cloud server for aggregation, which effectively mitigates the data-leakage problem of centralized learning. However, privacy leakage still exists in current federated learning: an adversary can obtain information about user data and the internal parameters of the model, or construct a model similar, or even fully equivalent, to the target model through model inversion attacks, membership inference attacks, model extraction attacks, and so on.
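To make the aggregation step described above concrete, the following is a minimal sketch of one federated-averaging (FedAvg) round; the least-squares local objective, the toy data, and all function names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares loss. Only the updated weights leave the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(updates, sizes):
    """Server-side aggregation: average the clients' weights,
    weighted by the number of local samples (the FedAvg rule)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy round: three clients refine a shared global model on private
# data, and the server aggregates the parameters they send back.
rng = np.random.default_rng(0)
w_global = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

updates = [local_sgd(w_global, X, y) for X, y in clients]
w_global = fedavg(updates, [len(y) for _, y in clients])
print("aggregated weights:", w_global)
```

Note that the raw data never leaves the clients, yet, as the paragraph above points out, the shared parameters themselves can still leak information to an adversary.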

Thus, we are interested in privacy-preserving federated learning, which can protect both the users' data and the aggregated model, and can resist all kinds of adversarial attacks, such as poisoning attacks, adversarial example attacks, and so on. Moreover, the performance of the model should not degrade in privacy-preserving federated learning compared with that of the original model.
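One common building block for protecting individual updates during aggregation is secure aggregation with pairwise additive masks: each pair of clients derives a shared random mask, one adds it and the other subtracts it, so the masks cancel in the server's sum and the server learns only the aggregate. The sketch below mocks the pairwise key agreement with a deterministic seed function; a real protocol would derive seeds from a key exchange (e.g., Diffie-Hellman) and handle client dropouts.

```python
import numpy as np

def masked_update(client_id, update, peer_ids, dim, seed_fn):
    """Add a pairwise mask for every peer: +mask if our id is smaller,
    -mask otherwise. The masks cancel when the server sums all updates."""
    masked = update.copy()
    for peer in peer_ids:
        if peer == client_id:
            continue
        rng = np.random.default_rng(seed_fn(client_id, peer))
        mask = rng.normal(size=dim)
        masked += mask if client_id < peer else -mask
    return masked

# Mocked pairwise seed agreement: symmetric in the two ids, so both
# clients in a pair derive the same mask. In practice this seed would
# come from a key exchange between the two clients.
seed_fn = lambda a, b: hash((min(a, b), max(a, b))) % (2**32)

updates = {0: np.array([1.0, 2.0]), 1: np.array([3.0, 4.0]), 2: np.array([5.0, 6.0])}
ids = list(updates)
masked = [masked_update(i, updates[i], ids, 2, seed_fn) for i in ids]

# The server sees only masked vectors; any single one reveals nothing
# without the peers' seeds, but their sum matches the true sum
# (up to floating-point error).
print(np.sum(masked, axis=0))   # ~ [ 9. 12.]
print(sum(updates.values()))    #   [ 9. 12.]
```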

To this end, it is important to collect work on privacy-preserving and secure federated learning, as well as on the relevant attacks against current federated learning schemes. Research articles and reviews in this area of study are welcome. We look forward to receiving your contributions.


Prof. Yanli Ren

Section Editor

Keywords

Federated Learning; Privacy-Preserving; Poisoning Attacks; Adversarial Example Attacks; Model Inversion Attacks; Membership Inference Attacks; Model Extraction Attacks
