Online Program

All Times ET

Friday, June 4
Machine Learning
Advancements in Learning
Fri, Jun 4, 11:25 AM - 1:00 PM
TBD

Adversarially Robust Subspace Learning (309663)

Fei Sha, University of Nebraska-Lincoln 
*Ruizhi Zhang, University of Nebraska-Lincoln 

Keywords: Subspace learning, Adversarial attack, Spiked Wishart model

Subspace learning aims to find a low-dimensional space embedded in high-dimensional data so that the projection error is minimal. It has many applications, such as video analytics, face recognition, and subspace clustering. In many of these applications, there may exist an adversary who can attack the system and modify the data with the goal of increasing the projection error of the subspaces learned by different algorithms. Under adversarial attack, it is crucial to develop an adversarially robust subspace learning algorithm that can tolerate stronger attacks and thereby increase security and safety. In this paper, we focus on the rank-one spiked Wishart model and develop an adversarially robust subspace learning method that yields the minimal worst-case projection error. Under the rank-one spiked Wishart model, we first characterize the optimal attack strategy that maximizes the projection error under certain constraints. Based on this optimal attack strategy, we then derive our robust estimator for subspace learning by minimizing the empirical adversarial risk. We propose a projected gradient descent algorithm to solve the optimization problem and obtain our estimator. Finally, we conduct extensive simulations to evaluate the performance of our adversarially robust subspace learning algorithm and compare it with the subspace learned by classical principal component analysis (PCA). Under different settings of adversarial attack, our proposed method is more robust than PCA under attack, at the price of somewhat reduced performance when no adversarial attack is present.
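The abstract's two computational ingredients are projected gradient descent on the sphere and a PCA baseline under the rank-one spiked Wishart model. The sketch below illustrates only those pieces under assumed parameters (spike strength `theta`, dimension `d`, sample size `n`); the paper's actual adversarial risk objective is not specified in the abstract, so no attack term is modeled here. The projection step is simply renormalization onto the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-one spiked Wishart model (illustrative parameters, not from the paper):
# each sample x_i = sqrt(theta) * g_i * u_true + standard Gaussian noise.
d, n, theta = 20, 500, 3.0
u_true = np.zeros(d)
u_true[0] = 1.0
X = (np.sqrt(theta) * rng.standard_normal((n, 1)) * u_true
     + rng.standard_normal((n, d)))

# PCA baseline: top eigenvector of the sample covariance matrix.
S = X.T @ X / n
u_pca = np.linalg.eigh(S)[1][:, -1]

# Projected gradient ascent on the unit sphere for the Rayleigh quotient
# u' S u. In the robust method, the gradient of a worst-case (adversarial)
# risk would replace S @ u; the projection step would be unchanged.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for _ in range(200):
    u = u + 0.1 * (S @ u)    # gradient step
    u /= np.linalg.norm(u)   # project back onto the unit sphere

alignment = abs(u @ u_pca)   # close to 1: PGD recovers the PCA direction
```

Without an adversarial term, the iteration converges to the leading eigenvector, matching PCA; the robust estimator differs precisely in the risk whose gradient is followed.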