1. School of Computer Science, Shaanxi Normal University, Xi'an 710199, China
2. School of Cyber Engineering, Xidian University, Xi'an 710071, China
[ "许勐璠(1989—),男,助理研究员,E-mail:[email protected]" ]
李兴华(1978—),男,教授,E-mail:[email protected]
XU Mengfan, LI Xinghua. Privacy-Preserving Federated Learning Based on Anti-Transfer Learning[J]. Journal of Xidian University, 2023, 50(4):89-99. DOI: 10.19665/j.issn1001-2400.2023.04.009.
Model stealing and gradient leakage attacks have increasingly become bottlenecks that limit the broad application of federated learning. Existing authorization-based intellectual property protection schemes and privacy-preserving federated learning schemes have addressed these challenges extensively, but they still suffer from authorization invalidation and high computational overhead. To solve these problems, this paper proposes a model intellectual property and privacy protection method for federated learning, which protects the privacy of local gradients while ensuring that the authorization of the aggregated model remains valid. Specifically, a lightweight gradient aggregation method based on blinding factors is designed, which substantially reduces the computational overhead of encryption and decryption by aggregating the ciphertext blinding factors. On this basis, an interactive co-training method based on anti-transfer learning is further proposed: during training, the Shannon mutual information between the representation vectors of auxiliary-domain data and the nuisance is increased, so that local gradient privacy is preserved while the model can be used only by authorized users in authorized domains. The security and correctness of the scheme are proved theoretically, and its advantages are verified on public datasets. The results show that, compared with existing schemes, the proposed scheme reduces the performance of the federated learning global model in unauthorized domains by at least about 47%, and lowers the computational complexity at the level of the gradient dimension.
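The abstract does not spell out the blinding-factor construction. As a hypothetical illustration only (in the spirit of standard pairwise-mask secure aggregation, not necessarily the paper's exact scheme), each pair of clients derives a shared additive pad from a common seed; the pads cancel exactly when the server sums the blinded gradients, so no individual gradient is revealed and no heavyweight per-element decryption is needed:

```python
import random

def pairwise_masks(client_ids, dim, seed_fn):
    """Derive one additive mask per client; the masks cancel in the sum.

    seed_fn(i, j) stands in for a shared secret between clients i and j
    (e.g. from a key agreement) -- here it is an assumed placeholder.
    """
    masks = {cid: [0.0] * dim for cid in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                rng = random.Random(seed_fn(i, j))
                pad = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
                for k in range(dim):
                    masks[i][k] += pad[k]  # client i adds the pad
                    masks[j][k] -= pad[k]  # client j subtracts the same pad
    return masks

def blind(gradient, mask):
    """Each client uploads its gradient plus its blinding mask."""
    return [g + m for g, m in zip(gradient, mask)]

def aggregate(blinded):
    """Server-side sum; the pairwise pads cancel, leaving the true sum."""
    dim = len(next(iter(blinded.values())))
    return [sum(vec[k] for vec in blinded.values()) for k in range(dim)]
```

The server only ever sees blinded vectors, yet their sum equals the plain gradient sum, which matches the abstract's claim of gradient privacy at aggregation cost linear in the gradient dimension.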
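The anti-transfer objective hinges on Shannon mutual information between representations and a nuisance variable. For discrete toy variables this quantity can be computed directly from empirical frequencies via I(Z;N) = H(Z) + H(N) − H(Z,N); this is only a didactic sketch of the quantity being maximized, not the estimator used in the paper:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy (in bits) of a sequence of discrete values."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_information(z, nuisance):
    """I(Z; N) = H(Z) + H(N) - H(Z, N) on paired discrete samples."""
    joint = list(zip(z, nuisance))
    return entropy(z) + entropy(nuisance) - entropy(joint)
```

Driving this mutual information up on auxiliary-domain data ties the learned representations to the nuisance, which is what degrades the model's usability in unauthorized domains.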
Keywords: federated learning; intellectual property protection; anti-transfer learning; privacy preservation; public key cryptography