[Seminar] Building Trustworthy Machine Learning Systems under Adversarial Environments
Monday, February 13, 2023
11:00 am - 12:00 pm
Speaker
Ning Wang
Virginia Tech
Location
PGH 232
Abstract
Modern AI systems, propelled by big data and deep learning over the last decade, have greatly improved our daily lives while also creating a long list of controversies. Generative ML models built for art creation have been abused by fraudsters to produce deepfakes; well-trained language models have been shown to generalize poorly and to encode intrinsic biases; and machine learning (ML) models have been demonstrated to leak sensitive information about data owners. Meanwhile, AI systems are often subject to malicious and stealthy subversion that jeopardizes their efficacy, exemplified by model poisoning during training and adversarial example generation at test time. It is evident that security, privacy, and robustness have become more important than ever for AI to gain wider adoption and societal trust.
In this talk, I will introduce two of my research efforts at the intersection of cybersecurity and AI: (1) using machine learning as a tool to construct novel security solutions (AI for security) and (2) securing machine learning systems against security and privacy vulnerabilities (security of AI). First, I will describe our work on enhancing the performance of ML-based intrusion detection systems (IDS). Recently, advanced intrusions that employ adversarial example (AE) generation strategies have proven highly effective at misleading a well-trained IDS model into making incorrect predictions by injecting only a small perturbation into the traffic flow. We proposed MANDA, a manifold- and decision-boundary-based detection method; by exploiting characteristics common to AEs, MANDA achieves attack-agnostic detection performance. Second, I will introduce our work on detecting and mitigating powerful poisoning attacks in distributed learning systems, particularly federated learning (FL). FL is vulnerable to Byzantine nodes that poison the global model by sending carefully crafted local model updates to achieve a malicious goal, e.g., manipulating its own credit-risk score. Recent poisoning attacks have grown in stealthiness and rendered existing defenses ineffective. We developed FLARE, a poisoned-model detection mechanism that leverages intermediate model representations to differentiate stealthy malicious models from benign ones. Lastly, I will conclude by highlighting my ongoing and future work toward trustworthy ML, security and privacy in AI-enabled cyber-physical systems, and explainable AI.
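To make the poisoned-update detection idea a bit more concrete, the Python sketch below is a minimal, hypothetical illustration (not the actual FLARE implementation; all names, values, and thresholds are illustrative assumptions). Each client's local model is summarized by an intermediate representation computed on a small auxiliary batch, and an update whose representation sits far from the benign majority is flagged as suspicious.

# Hypothetical sketch: flag suspicious federated-learning updates by comparing
# clients' intermediate model representations (illustrative, not FLARE itself).
import numpy as np

def pairwise_distance_scores(representations):
    # Score each client by its mean distance to every other client's
    # representation; stealthy poisoned models tend to sit farther
    # from the benign majority in representation space.
    n = len(representations)
    scores = np.zeros(n)
    for i in range(n):
        dists = [np.linalg.norm(representations[i] - representations[j])
                 for j in range(n) if j != i]
        scores[i] = np.mean(dists)
    return scores

def flag_suspicious(representations, z_thresh=2.0):
    # Flag clients whose distance score is an outlier (simple z-score rule).
    scores = pairwise_distance_scores(representations)
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return [i for i, zi in enumerate(z) if zi > z_thresh]

# Toy example: 10 benign clients clustered together plus 1 shifted outlier.
rng = np.random.default_rng(0)
reps = [rng.normal(0.0, 0.1, size=64) for _ in range(10)]
reps.append(rng.normal(1.0, 0.1, size=64))  # crafted update, shifted in representation space
print(flag_suspicious(np.stack(reps)))      # prints [10]

In practice the interesting part is how the representations and the outlier rule are chosen so that the test remains reliable even when a poisoned update is crafted to look benign in parameter space; the talk will cover how FLARE addresses this.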
About the Speaker
Ning Wang is a Ph.D. candidate in Computer Engineering at Virginia Tech, advised by Dr. Wenjing Lou. Her research mainly focuses on trustworthy artificial intelligence (AI), with interests in adversarial machine learning (ML), federated learning, anomaly detection, intrusion detection, differential privacy, and intelligent IoT. Ning has published her research in top venues for computer networking and security, including INFOCOM, ACSAC, AsiaCCS, and TDSC. Her research goal is to solve security and privacy challenges in AI systems and develop ML-based solutions for critical security applications.
