Leaky DNN: Stealing Deep-learning Model Secret with GPU Context-switching Side-channel

 November 8, 2022 at 9:58 pm

Abstract

Machine learning has been attracting strong interest in recent years. Numerous companies have invested great effort and resources to develop customized deep-learning models, which are among their key intellectual properties. In this work, we investigate to what extent the secrets of deep-learning models can be inferred by attackers. In particular, we focus on the scenario in which a model developer and an adversary share the same GPU while training a Deep Neural Network (DNN) model. We exploit a GPU side-channel based on context-switching penalties. This side-channel allows us to extract the fine-grained structural secrets of a DNN model, including its layer composition and hyper-parameters. Leveraging this side-channel, we developed an attack prototype named MosConS, which applies LSTM-based inference models to identify the structural secrets. Our evaluation of MosConS shows that the structural information can be accurately recovered. Therefore, we believe new defense mechanisms should be developed to protect training against the GPU side-channel.

Method

  • Shared resource: CUPTI (NVIDIA's CUDA Profiling Tools Interface), which exposes GPU performance counters and kernel timing information
  • The spy creates concurrency and GPU contention by launching CUDA kernels with tuned grid/block configurations alongside the victim's training kernels, so context-switching penalties become observable (a minimal probe is sketched after this list)
  • LSTM-based inference: five models are trained to (1) split the trace into training iterations, (2) recognize long operations, (3) recognize the remaining operations, (4) infer hyper-parameters, and (5) correct the reconstructed model (a simple trace pre-processing step is also sketched below)
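
A minimal sketch of the spy idea, assuming a GPU with time-sliced compute scheduling; this is my own illustration, not the paper's MosConS code. A single spy kernel records clock64() timestamps in a tight loop, so any time slice handed to the victim's kernels shows up as an unusually large gap between consecutive timestamps.

    // Hypothetical spy probe (illustration only, not the paper's implementation).
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void spy_probe(long long *stamps, int n) {
        // One thread is enough for a coarse timeline; a real attack would tune
        // the grid/block shape to control how much of the GPU the spy occupies.
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            for (int i = 0; i < n; ++i) {
                stamps[i] = clock64();   // per-SM cycle counter
            }
        }
    }

    int main() {
        const int n = 1 << 20;
        long long *d_stamps;
        long long *h_stamps = new long long[n];
        cudaMalloc((void **)&d_stamps, n * sizeof(long long));

        spy_probe<<<1, 32>>>(d_stamps, n);   // run concurrently with the victim
        cudaDeviceSynchronize();
        cudaMemcpy(h_stamps, d_stamps, n * sizeof(long long),
                   cudaMemcpyDeviceToHost);

        // Large gaps hint that another context ran in between.
        for (int i = 1; i < n; i += n / 16)
            printf("gap[%d] = %lld cycles\n", i, h_stamps[i] - h_stamps[i - 1]);

        cudaFree(d_stamps);
        delete[] h_stamps;
        return 0;
    }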

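A hypothetical post-processing step, again not taken from the paper: thresholding the spy's timestamp gaps yields candidate intervals during which the victim's kernels likely ran; features of this kind would then feed the LSTM models that split iterations and classify operations.

    // Host-side sketch: flag large gaps in the spy trace as victim activity.
    #include <cstdint>
    #include <vector>

    struct Interval {
        std::size_t index;     // position of the gap in the spy trace
        std::int64_t cycles;   // gap length in SM clock cycles
    };

    // Any gap above `threshold` cycles is treated as time likely spent
    // running the victim's kernels rather than the spy.
    std::vector<Interval> segment_trace(const std::vector<std::int64_t> &stamps,
                                        std::int64_t threshold) {
        std::vector<Interval> intervals;
        for (std::size_t i = 1; i < stamps.size(); ++i) {
            std::int64_t gap = stamps[i] - stamps[i - 1];
            if (gap > threshold)
                intervals.push_back({i, gap});
        }
        return intervals;
    }
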
Questions

  • How is the GPU shared between VMs? How do the different drivers avoid interfering with each other (especially when both run as root)?
  • Will these features (CUPTI) be available on GPUs in data centers?