IDS NLP SIG Research Seminar

Abstract

We are excited to bring together experts in the field to share their insights and recent developments in Large Language Models (LLMs).

Date
Nov 3, 2023 10:00 AM — 12:00 PM
Location
Conference Room, First Floor, Innovation 4.0, NUS (3 Research Link, Singapore 117602)

Talk Title: Towards robustness of AI models for Natural Language Processing

Abstract: AI models, especially Large Language Models (LLMs), have revolutionized the field of Natural Language Processing by achieving state-of-the-art results across a wide range of tasks. However, their widespread adoption has also raised concerns about their vulnerability to various adversarial attacks. In this talk, I will introduce our recent efforts in exploring the robustness of current AI architectures for NLP and in defending them against adversarial attacks.

Speaker: Anh Tuan Luu (https://tuanluu.github.io/) is an Assistant Professor at the School of Computer Science and Engineering, NTU. Before that, he was a postdoctoral fellow at MIT CSAIL and a member of the NLP group under the supervision of Prof. Regina Barzilay from 2018 to 2020. He obtained his Ph.D. degree in computer engineering from NTU in 2016. From 2016 to 2018, he was a research scientist at the Institute for Infocomm Research, Singapore, in the Natural Language Processing group of Dr. Jian Su. His research interests lie at the intersection of Artificial Intelligence and Natural Language Processing, with a focus on applications of pretrained language models, semantics, question answering, information extraction and knowledge construction, robustness and trustworthy AI, and recommendation systems.


Talk title: Do LLMs Excel in Every Aspect? Overcoming Limitations through Scalable Learning

Abstract: The advent of large-scale language models such as ChatGPT and GPT-4 has transformed the capabilities of natural language processing. However, the challenges of hallucination, scalability, and adaptability remain. This talk presents three of our research efforts that address these challenges. First, I will discuss a framework to distill commonsense knowledge from large language models, enabling commonsense reasoning at scale. Second, I will introduce a general-purpose commonsense verification language model that surpasses the performance of its larger counterparts, thereby offering a more resource-efficient yet effective alternative. Third, I will explore the concept of in-context adaptation learning for smaller-scale language models, demonstrating their ability to generalize across different domains on several typical NLP tasks. Finally, I will discuss remaining challenges and future research directions.

Speaker: Wenya Wang (https://personal.ntu.edu.sg/wangwy/) is an Assistant Professor at the School of Computer Science and Engineering, NTU. Prior to joining NTU, she worked with Noah Smith and Hanna Hajishirzi as a postdoc at the Paul G. Allen School of Computer Science and Engineering, University of Washington. She completed her PhD under the supervision of Sinno Jialin Pan at NTU. Her main research interests lie in reasoning for Natural Language Processing (and multimodal learning), including logical reasoning, commonsense reasoning, and knowledge integration.


Event Details:

  • Date: Friday, 3rd November
  • Time: 10:00 AM
  • Venue: Conference Room, First Floor, Innovation 4.0, NUS (3 Research Link, Singapore 117602)

We are eagerly anticipating this enriching seminar and hope to see you there for a morning of discussions and networking. Following the seminar, we will also be hosting an NLP SIG workshop (on the morning of Friday, 17th November), showcasing around 10 papers recently accepted at prestigious conferences. Please stay tuned for this upcoming event!