Machine Unlearning Scheme for Recommendation System Based on Gradient Attribution

Authors

  • Weiping Peng
  • Quanjiang Liu
  • Di Ma
  • Cheng Song
  • Yinmeng Jiang

DOI:

https://doi.org/10.54097/gqxpaf75

Keywords:

Machine Unlearning, Gradient Attribution, Recommendation System, BERT, Privacy Protection Technology

Abstract

To address the data privacy leakage risks faced by BERT-based sequential recommendation systems, this study proposes a gradient attribution-based machine unlearning scheme for recommendation systems. The scheme first leverages gradient attribution to precisely evaluate the contribution of each intermediate neuron to the prediction of user behavior information. Based on these evaluations, a filtering strategy discards redundant neurons and retains the "privacy neurons" closely associated with specific sensitive user behaviors. The activation values of these identified privacy neurons are then set to zero, completely eliminating their contribution to the model's predictions and thereby erasing the targeted categories of user behavior information from the recommendation model. Compared with retraining the model from scratch, this scheme offers a significant advantage in unlearning speed and has minimal impact on overall recommendation performance, providing an efficient and practical new approach to data privacy protection in sequential recommendation systems.
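The pipeline the abstract describes — attribute each intermediate neuron's contribution, select the most influential "privacy neurons," then zero their activations — can be illustrated with a minimal sketch. This is a toy illustration under assumed simplifications, not the paper's implementation: the model is a single ReLU layer with a linear scoring head standing in for the BERT stack, and with a linear head the integrated-gradients-style attribution collapses to a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermediate layer of a recommender scoring model (hypothetical sizes):
#   h = relu(x @ W1)   -> 16 intermediate neurons
#   score = h @ w2     -> linear output head standing in for the full model
W1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=16)

def hidden(x):
    return np.maximum(0.0, x @ W1)

def score(x, mask=None):
    h = hidden(x)
    if mask is not None:
        h = h * mask  # zeroed neurons contribute nothing to the prediction
    return h @ w2

def neuron_attribution(x):
    """Per-neuron attribution in the integrated-gradients style:
    attr_i = h_i * integral_0^1 d(score)/d(h_i)(alpha * h) d(alpha).
    With a linear head the gradient is the constant w2, so the
    integral collapses to h_i * w2_i."""
    return hidden(x) * w2

x = rng.normal(size=8)
attr = neuron_attribution(x)

# Treat the k neurons most influential for this behavior as "privacy
# neurons" and zero their activations to unlearn their contribution.
k = 3
privacy = np.argsort(np.abs(attr))[-k:]
mask = np.ones(16)
mask[privacy] = 0.0

print("score before unlearning:", score(x))
print("score after zeroing privacy neurons:", score(x, mask))
```

Because the head is linear, the drop in the score after masking equals exactly the summed attribution of the zeroed neurons, which is the sanity check one would want before applying such a mask in a real model.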

References

[1] Fuyu Lv, Taiwei Jin, Changlong Yu, Fei Sun, Quan Lin, Keping Yang, and Wilfred Ng. 2019. SDM: Sequential Deep Matching Model for Online Large-Scale Recommender System. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (Beijing, China). Association for Computing Machinery, New York, NY, USA, 2635–2643.

[2] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems (Boston, MA, USA). Association for Computing Machinery, New York, NY, USA, 7–10.

[3] Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. 2019. Social Attentional Memory Network: Modeling Aspect- and Friend-level Differences in Recommendation. In Proceedings of WSDM.

[4] Chong Chen, Min Zhang, Chenyang Wang, Weizhi Ma, Minming Li, Yiqun Liu, and Shaoping Ma. 2019. An Efficient Adaptive Transfer Neural Network for Social-aware Recommendation. In Proceedings of SIGIR. 225–234.

[5] Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, and Yang Zhang. 2021. Membership Inference Attacks Against Recommender Systems. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21). 16 pages.

[6] Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. 2020. Analyzing Information Leakage of Updates to Natural Language Models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS '20). Association for Computing Machinery, New York, NY, USA, 363–375.

[7] Chenyang Wang, Weizhi Ma, Min Zhang, Chong Chen, Yiqun Liu, and Shaoping Ma. 2020. Toward Dynamic User Intention: Temporal Evolutionary Effects of Item Relations in Sequential Recommendation. ACM Transactions on Information Systems (TOIS) 39, 2 (2020), 1–33.

[8] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 141–159.

[9] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. 2021. Graph Unlearning. arXiv preprint arXiv:2103.14991 (2021).

[10] Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, and Hao Wang. 2023. Federated Unlearning for On-Device Recommendation. In Proceedings of the 16th ACM International Conference on Web Search and Data Mining (WSDM 2023).

[11] Guy Shani, David Heckerman, and Ronen I Brafman. 2005. An MDP-based recommender system. JMLR 6, Sep (2005), 1265–1295.

[12] Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of WWW’10. ACM, Raleigh, North Carolina, USA, 811–820.

[13] Pengfei Wang, Jiafeng Guo, Yanyan Lan, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2015. Learning Hierarchical Representation Model for Next Basket Recommendation. In Proceedings of ACM SIGIR'15. ACM, Santiago, Chile, 403–412.

[14] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of NIPS’14 (December 08 - 13). MIT Press, Montreal, Canada, 3104–3112.

[15] Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. In Proceedings of ICDM. 197–206.

[16] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1441–1450.

[17] I. Islek and S. G. Oguducu, "A hybrid recommendation system based on bidirectional encoder representations," in Proc. Joint Eur. Conf. Mach. Learn. Knowl. Discovery Databases. Cham, Switzerland: Springer, 2020, pp. 225–236.

[18] B. Juarto and A. S. Girsang, ‘‘Neural collaborative with sentence BERT for news recommender system,’’ Int. J. Informat. Vis., vol. 5, no. 4, pp. 448–455, 2021.

[20] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319–3328. PMLR.

[21] Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2021. Self-attention attribution: Interpreting information interactions inside transformer. In The Thirty-Fifth AAAI Conference on Artificial Intelligence. AAAI Press.

[22] Yinzhi Cao and Junfeng Yang. 2015. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pages 463–480. IEEE.

[23] Gaoyang Liu, Yang Yang, Xiaoqiang Ma, Chen Wang, and Jiangchuan Liu. 2020. Federated unlearning. arXiv preprint arXiv:2012.13891.

[24] Chen C, Sun F, Zhang M, et al. 2022. Recommendation Unlearning. arXiv preprint arXiv:2201.06820.

[25] Li Y, Zheng X, Chen C, et al. 2022. Making Recommender Systems Forget: Learning and Unlearning for Erasable Recommendation. arXiv preprint arXiv:2203.11491.

[26] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2020. Transformer feed-forward layers are key-value memories. CoRR, abs/2012.14913.

[27] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge Neurons in Pretrained Transformers. In Proceedings of ACL 2022.

[28] Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based Recommendations with Recurrent Neural Networks. In Proceedings of ICLR.

[29] Jiaxi Tang and Ke Wang. 2018. Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding. In Proceedings of WSDM. 565–573.

Published

27-08-2025

Section

Articles