[Defense] Learning to Learn via Meta-dataset Distillation
Tuesday, July 25, 2023
11:00 am - 12:00 pm
In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
Mikhail Mekhedkin Meskhi will defend his proposal
Learning to Learn via Meta-dataset Distillation
Abstract
While existing work on meta-learning mostly focuses on building a base model with a pre-determined parameterization, our method is centered on distilling a base dataset from which new models with different parameterizations can be trained and later fine-tuned on any new learning task. This allows the learning effort to be reused straightforwardly in different modeling contexts, i.e., model choices on new tasks. To achieve this, we investigate the generalization of existing data summarization techniques by moving from a single-task context to the multi-task context of meta-learning. We demonstrate that such a mechanism can be repurposed to compress the most salient information across multiple heterogeneous datasets into a single set of meta-data points. We also propose a probabilistic data selection algorithm trained to select the most relevant meta-data points for a given context, improving the model's customizability on specific task data. Our evaluation of the proposed method on several benchmarks shows competitive performance compared to existing state-of-the-art techniques in meta-learning.
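For readers unfamiliar with dataset distillation, the core mechanism the abstract builds on is a bilevel optimization: an inner loop trains a freshly initialized model on a small learnable synthetic dataset, and an outer loop updates that synthetic data so the resulting model performs well on real data. Below is a minimal PyTorch sketch of this generic idea; the linear model, shapes, learning rates, and variable names are all hypothetical placeholders for illustration, not the specific method being defended.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes: 10 distilled points, 8 features, 2 classes.
n_distill, n_features, n_classes = 10, 8, 2

# Learnable distilled dataset (the "meta-data points"); labels kept fixed and balanced.
x_syn = torch.randn(n_distill, n_features, requires_grad=True)
y_syn = torch.arange(n_distill) % n_classes

# A toy "real" task standing in for a meta-training task.
x_real = torch.randn(64, n_features)
y_real = torch.randint(0, n_classes, (64,))

outer_opt = torch.optim.Adam([x_syn], lr=1e-2)
inner_lr = 0.1

for step in range(200):
    # Re-sample the model each outer step: the distilled data should be
    # useful for new initializations, not tied to one trained model.
    w = (0.1 * torch.randn(n_features, n_classes)).requires_grad_()

    # Inner step: one differentiable gradient step on the distilled data.
    inner_loss = F.cross_entropy(x_syn @ w, y_syn)
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_adapted = w - inner_lr * g

    # Outer step: update the distilled data so the adapted model fits real data.
    outer_loss = F.cross_entropy(x_real @ w_adapted, y_real)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

Re-initializing the model inside the loop is the detail that connects to the abstract's goal of reuse across different parameterizations: the distilled points are optimized to train models well in general, rather than to suit a single fixed network.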
Tuesday, July 25, 2023
11:00 AM - 12:00 PM CT
Online via
Dr. Ricardo Vilalta, Proposal Advisor
Faculty, students, and the general public are invited.
