[Defense] Battling Deception in the Era of Large Language Models
Monday, November 4, 2024
3:00 pm - 4:30 pm
In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Sadat Shahriar will defend his proposal

Battling Deception in the Era of Large Language Models
Abstract
The rise of large language models (LLMs) has brought tremendous advancements, but it also poses a serious threat in the form of widespread misinformation. A recent study highlights a tenfold increase in websites hosting propaganda news, signaling the scale of this issue. With the accessibility and surprising capabilities of LLMs, nearly anyone can generate false information and disseminate it online. AI-generated synthetic content increasingly resembles authentic news yet often carries misinformation, making it harder to detect than the deception targeted by existing state-of-the-art (SOTA) methods. This research focuses on combating AI-generated synthetic/fake content by proposing novel techniques that outperform SOTA deception detection technologies. We explore methods to identify the subtle signals embedded in AI-generated content and investigate different forms of textual deception, including fake news, pink slime journalism, social media rumors, collusion scams, and phishing schemes. Our studies also focus on the transferability of features across domains, introducing novel approaches such as feature-augmented soft domain transfer to improve the performance of NLP models. By leveraging domain-specific knowledge, this research enhances the detection of deceptive language and offers broader applications for cross-domain models in natural language tasks. Additionally, we examine the role of quantifying psychological traits in unraveling deceptive content. Ultimately, this work proposes a comprehensive framework for detecting AI-generated misinformation and offers strategies to navigate the rapidly evolving landscape of online misinformation in the age of AI.
PGH 550
Dr. Arjun Mukherjee, proposal advisor
Faculty, students, and the general public are invited.
