Systems supported by machine learning (ML) algorithms have brought significant benefits to our daily life. With the growing deployment of such systems, their security has become a major concern in many application domains. This project aims to address this security concern in three main directions:
i) analysing adversarial ML from a game-theoretic perspective,
ii) extending adversarial ML to account for more complex learning paradigms, and
iii) studying adversarial ML on graph-structured data.
This project benefits ML research by providing frameworks for identifying the vulnerabilities of ML algorithms and developing defense strategies that make ML more secure. Moreover, it builds a connection between game theory and ML research by modelling attackers and learners as players in a game, which enriches existing game-theoretic frameworks. In addition, the project will develop novel optimization techniques for computing attack and defense strategies, which also enriches optimization research.
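To make the attacker-learner framing concrete, the sketch below (a hypothetical toy example, not part of this project's methods) casts a standard gradient-based evasion attack as the attacker's move in a zero-sum game: the attacker perturbs an input to a fixed logistic-regression learner so as to increase the learner's loss, while the learner's best response would be to train against such worst-case inputs. The weights, input, and budget `eps` are all illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, eps):
    """One fast-gradient-sign step: the attacker's (approximate) best
    response under an L-infinity budget of eps."""
    # Gradient of the logistic loss with respect to the input x.
    grad_x = (sigmoid(w @ x + b) - y) * w
    # Move x in the direction that increases the learner's loss.
    return x + eps * np.sign(grad_x)

# Toy learner: hand-picked weights, true label y = 1 for the clean input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_attack(w, b, x, y, eps=0.9)

clean_score = sigmoid(w @ x + b)   # learner's confidence on the clean input
adv_score = sigmoid(w @ x_adv + b) # confidence after the attacker's move
print(clean_score, adv_score)
```

Here the clean input is classified correctly (score above 0.5) while the perturbed input is not, illustrating the vulnerability the project's defense strategies would be computed against.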