School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454003, China
[ "宋 成(1980—),男,副教授,博士,E-mail:[email protected];" ]
程道晨(1999—),男,河南理工大学硕士研究生,E-mail:[email protected]
[ "彭维平(1979—),男,教授,博士,E-mail:[email protected]" ]
SONG Cheng, CHENG Daochen, PENG Weiping. 一种高效的联邦学习隐私保护方案 (An Efficient Privacy Protection Scheme for Federated Learning)[J]. Journal of Xidian University, 2023, 50(5): 178-187. DOI: 10.19665/j.issn1001-2400.20230403.
Federated learning allows clients to train a model jointly by sharing only gradients, rather than handing their training data to a server. Although this avoids exposing data directly to third parties and thus offers some protection, research has shown that the gradients transmitted in federated learning can still leak private information. However, the computation and communication overhead introduced by encryption during training degrades training efficiency and is hard to accommodate in resource-constrained environments. To address the security and efficiency problems of existing privacy-protection schemes for federated learning, a secure and efficient privacy-preserving federated learning scheme is proposed that combines homomorphic encryption with compression techniques. The homomorphic encryption algorithm is optimized to reduce the number of operations and improve computational efficiency while preserving the security of the scheme. In addition, a gradient filtering compression algorithm is designed that filters out local updates unrelated to the convergence trend of the global model and quantizes the remaining update parameters with a compression operator of negligible computational cost, improving communication efficiency while maintaining model accuracy. Security analysis shows that the scheme satisfies security properties such as indistinguishability, data privacy, and model security. Experimental results show that the scheme not only achieves high model accuracy but also has clear advantages over existing schemes in communication and computation overhead.
federated learning; privacy-preserving techniques; homomorphic encryption; natural compression
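The two communication-saving steps named in the abstract, filtering out local updates that do not follow the global model's convergence trend and quantizing the surviving updates with a natural compression operator, can be illustrated with a minimal sketch. The sketch below is based on the generic natural-compression and sign-agreement (CMFL-style) relevance ideas rather than on the paper's actual implementation; the function names, the 0.5 threshold, and the example sizes are illustrative assumptions.

```python
import numpy as np

def natural_compress(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Unbiased natural compression: stochastically round each nonzero entry
    to one of its two bracketing powers of two, preserving the sign."""
    sign = np.sign(x)
    mag = np.abs(x).astype(np.float64)
    out = np.zeros_like(mag)
    nz = mag > 0
    exp = np.floor(np.log2(mag[nz]))            # exponent of the lower power of two
    low, high = 2.0 ** exp, 2.0 ** (exp + 1)    # bracketing powers of two
    p_up = (mag[nz] - low) / (high - low)       # round-up probability keeps E[C(x)] = x
    out[nz] = np.where(rng.random(p_up.shape) < p_up, high, low)
    return sign * out

def is_relevant(local_update: np.ndarray, last_global_update: np.ndarray,
                threshold: float = 0.5) -> bool:
    """Sign-agreement relevance test (CMFL-style): upload the local update only
    if enough of its coordinates point the same way as the last global update."""
    agreement = np.mean(np.sign(local_update) == np.sign(last_global_update))
    return bool(agreement >= threshold)

# Example round on one client (values are illustrative):
rng = np.random.default_rng(0)
local = rng.normal(size=1000) * 0.01        # local gradient update
global_prev = rng.normal(size=1000) * 0.01  # previous global update
if is_relevant(local, global_prev):
    compressed = natural_compress(local, rng)
    # ...the compressed update would then be encrypted with an additively
    # homomorphic scheme (e.g., Paillier) before being sent to the server.
```

Natural compression keeps only a sign and a power-of-two magnitude per coordinate and is unbiased in expectation, so the quantization cost is negligible compared with the homomorphic encryption step that follows it.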