We are delighted to share that our NLP SIG seminar on Large Language Models (LLMs), held on the 3rd of November, brought together two leading experts in the field to share their insights and recent developments.
Anh Tuan Luu offered a deep dive into the robustness of NLP models, spotlighting pioneering work on defending AI systems against adversarial attacks to bolster their reliability and trustworthiness.
Wenya Wang addressed the inherent challenges of hallucination, scalability, and adaptability in modern LLMs such as GPT-4. Her research introduces more capable language models designed for commonsense reasoning and rapid adaptation across a wide range of NLP tasks.
The talks were followed by a lively Q&A session in which attendees engaged directly with the speakers, fostering a rich exchange of ideas and clarifying complex topics in real time.