[Defense] Methodology for Evaluating the Generalizability and Interpreting the Prediction of Neural Code Intelligence Models

Monday, April 3, 2023

9:00 am - 11:00 am

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
MD Rafiqul Islam Rabin
will defend his dissertation
Methodology for Evaluating the Generalizability and Interpreting the Prediction of Neural Code Intelligence Models


Abstract

Deep neural models are increasingly being used in various code intelligence tasks to improve developer productivity. These tasks include suggesting relevant code, detecting bugs in programs, recommending fixes, and more. Developing such models from scratch can be a complicated and time-consuming process. As a result, researchers commonly rely on existing models to solve different downstream tasks. Although these models have been leveraged successfully, there have been few studies on testing them. Testing is challenging because these models are usually black boxes and learn from noise-prone data sources. To adapt such models reliably, researchers often need to reason about the behavior of the underlying models and the factors that affect them. However, how well these models generalize to unseen data and what relevant features they learn for making predictions remain largely unknown. A lack of knowledge in these areas may lead to severe consequences, especially in safety-critical applications. Moreover, state-of-the-art approaches are typically specific to a particular set of architectures and require access to the model’s parameters, which hinders reliable adoption by most researchers. To address these challenges, we have proposed model-agnostic methodologies that inspect models by analyzing input programs without accessing the model’s parameters. The overarching goal is to enhance our understanding of the models’ inference in terms of their generalizability and interpretability. Specifically, we assess a model’s ability to generalize its performance with respect to noise-inducing memorization and semantic-preserving transformations. Additionally, we identify critical input features for interpreting a model’s predictions through prediction-preserving simplification. Our results suggest that neural code intelligence models are prone to memorizing noisy data with their excessive parameters, are often vulnerable to very small semantic changes, and typically rely on only a few syntactic features for making their predictions; thus, models may suffer from poor generalization performance. These observations could help researchers better understand the behavior of these models and prompt them to focus their efforts on devising new techniques to alleviate the shortcomings of existing models.
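To make the two model-agnostic analyses mentioned above concrete, the sketch below illustrates the general idea in Python. It assumes a hypothetical model_predict function that returns a label (for example, a predicted method name) for a code snippet; it is not the dissertation's actual tooling. A semantic-preserving transformation (here, a simple identifier renaming) probes robustness, and a greedy prediction-preserving reduction removes tokens while the prediction stays unchanged.

```python
import re
from typing import Callable, List


def rename_identifier(code: str, old: str, new: str) -> str:
    """Semantic-preserving transformation: rename one identifier throughout the snippet."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)


def prediction_survives(model_predict: Callable[[str], str],
                        original: str, transformed: str) -> bool:
    """Generalizability probe: does the prediction stay the same after the transformation?"""
    return model_predict(original) == model_predict(transformed)


def simplify(model_predict: Callable[[str], str],
             tokens: List[str], target: str) -> List[str]:
    """Prediction-preserving simplification: greedily drop tokens
    as long as the model still produces the target prediction."""
    reduced = list(tokens)
    progress = True
    while progress:
        progress = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if candidate and model_predict(" ".join(candidate)) == target:
                reduced = candidate
                progress = True
                break
    return reduced


if __name__ == "__main__":
    # Toy stand-in model: "predicts" a method name from a keyword in the code.
    def model_predict(code: str) -> str:
        return "sort" if "sorted" in code else "unknown"

    snippet = "def f ( items ) : return sorted ( items )"
    renamed = rename_identifier(snippet, "items", "elements")
    print(prediction_survives(model_predict, snippet, renamed))  # True for this toy model
    print(simplify(model_predict, snippet.split(), "sort"))      # e.g. ['sorted']
```

The tokens that survive the reduction hint at which parts of the input the model actually relies on, mirroring the abstract's observation that these models often base their predictions on only a few syntactic features.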


Monday, April 3, 2023
9:00 am - 11:00 am CT
Online via

Dr. Mohammad Amin Alipour, Faculty Advisor

Faculty, students, and the general public are invited.
