NLP SIG Showcase – Nov 2024 Edition


Date: Nov 4, 2024, 8:30 AM – 3:00 PM
Location: Seminar Room, Innovation 4.0, NUS (3 Research Link, Singapore 117602)

We are pleased to share the highlights from the recent IDS NLP SIG Workshop, where cutting-edge papers published by IDS students and researchers at top-tier conferences were showcased. This event provided a unique opportunity for attendees to delve into the latest advancements in the fields of Natural Language Processing (NLP) and Machine Learning (ML), and to learn more about the ongoing research activities at IDS.

The workshop featured two industry speakers, both IDS and SoC alumni, from TikTok and Salesforce, respectively. They presented recent research developments from industry, offering valuable insights into practical applications and emerging trends in NLP and ML.

In addition to the keynote presentations, the workshop showcased a wide range of recent research works on the latest topics in Large Language Models (LLMs) and ML through poster presentations. Attendees had the opportunity to interact with students and researchers on a one-on-one basis, fostering engaging discussions and potential collaborations.

We extend our gratitude to all participants for making this event a success and look forward to future workshops that continue to push the boundaries of NLP and ML research.

Event Details

  • Date: Monday, 4 November 2024
  • Time: 9:20 AM – 12:30 PM
  • Venue: First Floor, Innovation 4.0, NUS (3 Research Link, Singapore 117602), near COM3 and COM4

Schedule

  • 09:25 – 09:30: Opening Remarks
  • 09:30 – 10:00: Efficient Long Video Generation with Storytelling Capabilities, Daquan Zhou, TikTok
  • 10:00 – 10:30: Advancing Time Series Forecasting: Unified Transformers and Foundation Model with Mixture-of-Experts, Juncheng Liu, Salesforce
  • 10:30 – 10:45: Tea Break
  • 10:45 – 12:15: Poster Presentations

Keynote Talks

Efficient Long Video Generation with Storytelling Capabilities

  • Speaker: Daquan Zhou, TikTok
  • Bio: Daquan Zhou graduated from the National University of Singapore and joined ByteDance as a research scientist in 2022. He received the Singapore Data Science Consortium (SDSC) PhD Dissertation Award in 2021. Previously, he contributed to the development of Singapore’s first commercial artificial satellite (2016–2018). His paper, “Coordinate Attention for Efficient Mobile Network Design,” is currently ranked fifth on the CVPR 2021 Most Influential Papers list. His work on robustness, “Fully Attentional Network,” served as the foundation for the winning solutions in five segmentation tracks of the 2022 Visual Robustness Challenge and was integrated as a base model into NVIDIA’s TAO Toolkit.
  • Abstract: Video generation has been a hot research area in recent years. However, generating realistic, continuous, and long videos remains a challenging problem in the field. This talk explores how to design an efficient video generation architecture with temporal continuity and the ability to express a complete storyline from three perspectives: dataset generation, video generation model algorithm design, and computational overhead.

Advancing Time Series Forecasting: Unified Transformers and Foundation Model with Mixture-of-Experts

  • Speaker: Juncheng Liu, Salesforce
  • Bio: Juncheng Liu is currently a Research Scientist at Salesforce AI Research. He obtained his PhD in Computer Science from the School of Computing (SoC), National University of Singapore (NUS), advised by Prof. Xiaokui Xiao. His research interests include time series foundation models, graph learning, and related applications. He is the recipient of the 2022 ACM SIGMOD Research Highlight Award and the Best Research Paper Award at VLDB 2021.
  • Abstract: This talk will focus on our two recent works on time series forecasting at Salesforce. The first, UniTST, introduces a unified Transformer model that captures both variate and temporal dependencies, offering strong performance with a simple architecture. The second, Moirai-MoE, builds on our previous time series foundation model, Moirai. It is the first mixture-of-experts model for time series, achieving token-level specialization and outperforming existing models with fewer parameters. (A toy sketch of the mixture-of-experts idea follows below.)
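
For readers less familiar with the mixture-of-experts (MoE) architecture mentioned in the abstract, below is a minimal, illustrative PyTorch sketch of a token-level MoE layer. It is not the Moirai-MoE implementation; the class name TokenMoE and the parameters (d_model, num_experts, top_k) are hypothetical, chosen only to show how a learned router can send each token to its own small subset of expert networks.

# Minimal sketch of a token-level mixture-of-experts (MoE) layer in PyTorch.
# NOT the Moirai-MoE implementation; it only illustrates the general idea
# of routing each token to a small subset of expert feed-forward networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenMoE(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, d_model: int, num_experts: int = 4, top_k: int = 1):
        super().__init__()
        self.top_k = top_k
        # A linear "router" scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is a small independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model). Each token gets its own routing
        # decision, which is what "token-level specialization" refers to.
        gate_logits = self.router(x)                       # (B, T, E)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)               # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route a batch of 32-step series embeddings through the layer.
if __name__ == "__main__":
    layer = TokenMoE(d_model=64, num_experts=4, top_k=1)
    y = layer(torch.randn(2, 32, 64))
    print(y.shape)  # torch.Size([2, 32, 64])

Because only top_k experts run per token, such a layer adds model capacity without a proportional increase in per-token compute, which is the general mechanism behind MoE models matching or beating dense models with fewer active parameters.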

Poster Presentations

  • Encoding and Controlling Global Semantics for Long-form Video Question Answering. Thong Nguyen, et al. EMNLP 2024.
  • Aligning Translation-Specific Understanding to General Understanding in Large Language Models. Yichong Huang, et al. EMNLP 2024.
  • MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration. Lin Xu, et al. EMNLP 2024.
  • One-Shot Sequential Federated Learning for Non-IID Data by Enhancing Local Model Diversity. Naibo Wang, et al. ACM MM 2024.
  • Towards Effective Federated Graph Anomaly Detection via Self-boosted Knowledge Distillation. Jinyu Cai, et al. ACM MM 2024.
  • Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning. Xiang Lan, et al. ECCV 2024.
  • Meta-optimized Angular Margin Contrastive Framework for Video-Language Representation Learning. Thong Nguyen, et al. ECCV 2024.
  • Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models. Zhiyuan Hu, et al. NeurIPS 2024.
  • Localized Zeroth-Order Prompt Optimization. Wenyang Hu, et al. NeurIPS 2024.
  • An I/O Efficient Scheme for Heterogeneous and Distributed LLM Operation. Yiqi Zhang, et al. NeurIPS 2024.
  • Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration. Yichong Huang, et al. NeurIPS 2024.
  • Mercury: A Code Efficiency Benchmark for Code Large Language Models. Mingzhe Du, et al. NeurIPS 2024.
  • Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data. Zhaomin Wu, et al. NeurIPS 2024.