
[Seminar] Edge-AI in the Making: Algorithm Design and Theory

Friday, February 24, 2023

11:00 am - 12:00 pm

Speaker

Sen Lin, Ph.D.

NSF AI-EDGE Institute at Ohio State University

Location
PGH 232

Abstract

Edge-AI has recently emerged as a confluence of AI and edge computing. By processing and analyzing data close to Internet-of-Things (IoT) devices at the network edge, Edge-AI carries the promise of providing intelligent services anywhere and at any time, while protecting the privacy and security of the data. However, achieving Edge-AI is highly nontrivial due to the nature of edge networks, e.g., edge devices are resource-constrained and tasks evolve in dynamically changing environments. Departing from conventional machine learning paradigms, which are often data-hungry and trained for given tasks, Edge-AI needs to be more efficient and adaptive.

In this talk, I will present our recent findings toward efficient Edge-AI through the lens of continual learning (CL), covering both algorithm design and theory. First, I will introduce algorithm designs that enable CL in Edge-AI in a privacy-preserving manner. Based on an efficient characterization of task correlation, we develop principled methods to improve knowledge transfer in CL, which achieve "negative" forgetting for the first time using a fixed-capacity network without data replay. More importantly, our proposed techniques can potentially be leveraged for "machine unlearning" to further protect the privacy and security of user data in Edge-AI. Next, I will present our theoretical work toward explainable CL. Although there have been significant experimental efforts to address the forgetting issue, the theoretical understanding of CL is still at an early stage. Our theoretical analysis, under overparameterized linear models, provides the first-known explicit form of the expected forgetting and generalization error. Further analysis of this key result yields a number of theoretical explanations of how overparameterization, task similarity, and task ordering affect both forgetting and generalization error in CL. More interestingly, we show that some of these insights carry over to practical setups on real datasets using deep neural networks, opening up many interesting directions for future studies in CL.
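To make the forgetting notion concrete, the minimal sketch below (illustrative only, not the speaker's actual analysis; all names are hypothetical) trains an overparameterized linear model on two synthetic tasks in sequence via minimum-norm updates and measures forgetting as the increase in task-1 error after learning task 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic linear-regression tasks with correlated ground-truth weights.
d, n = 50, 40                       # overparameterized regime: d > n
w1 = rng.normal(size=d)
w2 = w1 + 0.3 * rng.normal(size=d)  # smaller noise scale -> more similar tasks

def make_task(w_true):
    X = rng.normal(size=(n, d))
    return X, X @ w_true

X1, y1 = make_task(w1)
X2, y2 = make_task(w2)

def min_norm_fit(X, y, w_init):
    # Move from w_init to the closest interpolating solution of X w = y
    # (minimum-distance update via the pseudo-inverse).
    return w_init + np.linalg.pinv(X) @ (y - X @ w_init)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w_after_t1 = min_norm_fit(X1, y1, np.zeros(d))
err_t1_before = mse(w_after_t1, X1, y1)      # ~0: task 1 is interpolated

w_after_t2 = min_norm_fit(X2, y2, w_after_t1)
err_t1_after = mse(w_after_t2, X1, y1)       # task-1 error after learning task 2

forgetting = err_t1_after - err_t1_before
print(f"forgetting on task 1: {forgetting:.4f}")
```

Rerunning the sketch with a smaller gap between the two ground-truth weight vectors (higher task similarity) typically reduces the measured forgetting, which is the kind of dependence the talk's theoretical results characterize explicitly.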

About the Speaker

Sen Lin is a Postdoctoral Scholar in the NSF AI-EDGE Institute at Ohio State University. He received his Ph.D. degree in Electrical Engineering from Arizona State University. His research interests fall broadly at the intersection of machine learning and wireless networking. Currently, his research focuses on developing algorithms and theory for continual learning, meta-learning, reinforcement learning, adversarial machine learning, and bilevel optimization, with applications in multiple domains, e.g., edge computing, security, and network control. His research results have been published in top conferences and journals, including NeurIPS, ICLR, AAMAS, Mobihoc, INFOCOM, IEEE ToN, and IEEE TNNLS. He also coauthored a book on Edge-AI (Morgan & Claypool Publishers, 2020). His papers have received recognition including the WiOpt'18 Best Student Paper Award and a Spotlight presentation at ICLR 2022.
