PSGD: A Privacy-preserving Stochastic Gradient Descent Method During Protection Transitions


Approach overview

Advances in privacy-enhancing technologies, such as context-aware and personalized privacy models, have paved the way for successfully managing the data utility-privacy trade-off. This is achieved by adapting the level of data protection to current privacy needs in order to maximize the usefulness of the data. However, continuously rebalancing the protection level of generated spatio-temporal data without considering the temporal correlations among these data can lead to temporal privacy leakage during protection transitions, thus making these technologies vulnerable to inference attacks. To overcome this issue, we propose in this paper a novel stochastic gradient descent method to protect user privacy during protection transitions. The solution aims to minimize gaps in the level of precision between sequential protected data by iteratively decreasing the level of protection until the newly desired level is reached. It manages dependencies between sequential data values and supports attribute diversity, multi-attribute handling, and protection function diversity/dynamicity. Our solution is therefore generic and reusable by various privacy-preserving models that enable protection variation. We validated our proposal and evaluated its performance. Results show that our approach delivers scalability with low computational and storage complexity.
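The core idea — replacing an abrupt protection-level change with a sequence of small, noisy descent steps toward the target level — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function name `psgd_transition`, the quadratic gap loss, and all parameter values below are illustrative assumptions.

```python
import random

def psgd_transition(p_current, p_target, lr=0.1, noise_scale=0.01,
                    max_iters=1000, tol=1e-2, seed=0):
    """Hypothetical sketch: smoothly lower a protection level toward a
    target via noisy gradient steps, instead of one abrupt transition."""
    rng = random.Random(seed)  # seeded for reproducibility
    p = p_current
    trajectory = [p]
    for _ in range(max_iters):
        # Gradient of an assumed gap loss 0.5 * (p - p_target)^2,
        # perturbed with Gaussian noise to model the stochastic update.
        grad = (p - p_target) + rng.gauss(0.0, noise_scale)
        p -= lr * grad
        trajectory.append(p)
        if abs(p - p_target) < tol:
            break
    return trajectory

# Each intermediate release changes the protection level only slightly,
# so no single transition exposes a large precision gap to an observer.
levels = psgd_transition(1.0, 0.2)
```

Under this toy model, the precision gap between any two consecutive releases is bounded by roughly `lr` times the remaining distance to the target, which is the intuition behind minimizing gaps during protection transitions.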


Research Areas

  • Privacy Enhancing Technologies
  • Data Privacy
  • Stochastic Gradient Descent Methods
  • Context-awareness
  • Internet of Things

PhD Student

Karam Bou Chaaya

Project Members

  • Richard Chbeir
  • Philippe Arnould
  • Mahmoud Barhamgi
  • Djamal Benslimane

PSGD Prototype