
Information Security, 2021, No. 1




CLC number: TP309; TP181         Document code: A         Article ID: 2096-4706(2021)01-0138-05


Study on Privacy Protection Techniques of Federated Learning

SHI Jin, ZHOU Ying, DENG Jialei

(Diankeyun (Beijing) Technology Co., Ltd., Beijing 100041, China)

Abstract: As an emerging artificial intelligence computing framework, federated learning aims to solve the problems of secure data exchange and privacy protection in distributed environments. However, federated learning still faces security problems in application. In view of this, this paper analyzes the privacy and security issues of federated learning at multiple levels and proposes targeted defensive measures. For secure, high-speed data exchange in federated learning, a federated learning model based on an improved homomorphic encryption algorithm is proposed, providing a reference for the practical implementation of federated learning.

Keywords: federated learning; user privacy; data security; homomorphic encryption
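The abstract names the core technique (homomorphic encryption applied to federated learning) but this page carries no implementation detail, so the following is only a minimal illustrative sketch of the underlying idea, not the authors' improved algorithm: clients encrypt their local model updates under an additively homomorphic scheme, and the aggregation server sums ciphertexts without ever seeing any individual update. The sketch assumes the open-source `phe` (python-paillier) package and uses made-up example values.

```python
# Minimal sketch: FedAvg-style aggregation over Paillier ciphertexts.
# Assumes the open-source `phe` (python-paillier) package; the values
# below are illustrative, not from the paper.
from phe import paillier

# Key pair held by a trusted party (or jointly by the clients); the
# aggregation server only ever sees the public key and ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Hypothetical local model updates (e.g., gradients) from three clients.
client_updates = [
    [0.12, -0.40, 0.05],
    [0.10, -0.35, 0.02],
    [0.15, -0.42, 0.07],
]

# Each client encrypts its update coordinate-wise before upload.
encrypted_updates = [
    [public_key.encrypt(w) for w in update] for update in client_updates
]

# The server sums ciphertexts directly: Paillier is additively
# homomorphic, so Enc(a) + Enc(b) decrypts to a + b. The server never
# observes any individual client's plaintext update.
encrypted_sum = encrypted_updates[0]
for update in encrypted_updates[1:]:
    encrypted_sum = [acc + c for acc, c in zip(encrypted_sum, update)]

# Only the key holder decrypts, and only the aggregate (the mean update).
n = len(client_updates)
averaged = [private_key.decrypt(c) / n for c in encrypted_sum]
print(averaged)  # ≈ [0.1233, -0.39, 0.0467]
```

An additively homomorphic scheme suffices here because FedAvg-style aggregation only needs a sum of client updates followed by a scalar division; a production deployment would additionally handle key distribution, fixed-point encoding precision, and dropout of clients mid-round.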




Author biography: SHI Jin (1989—), male, Han nationality, born in Zhumadian, Henan; assistant engineer; master's degree; research interests: network security.