Please see this blog post for more information!

Schedule

Date: July 10, 2022

Morning sessions: 9:00–12:30
Extra Q&A sessions: 8:00–8:45 and 12:30–13:00
  T1: Text Generation with Text-Editing Models
  T2: Self-supervised Representation Learning for Speech Processing
  T3: New Frontiers of Information Extraction

Afternoon sessions: 14:00–17:30
Extra Q&A sessions: 13:30–14:00 and 18:00–18:45
  T4: Human-Centered Evaluation of Explanations
  T5: Multimodal Machine Learning
  T6: Contrastive Data and Learning for Natural Language Processing

All times are Pacific Daylight Time (GMT-7).

Tutorials will be delivered live in a hybrid mode. Room assignments and Zoom links for the tutorials and for their extra live Q&A sessions will be listed in the conference handbook and on the virtual-attendance website, which will also link to materials including videos. Remote attendees who cannot join the tutorials synchronously can attend the extra Q&A sessions.

T1: Text Generation with Text-Editing Models

Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, Aliaksei Severyn

Speaker Bios

Time: 9:00–12:30

Extra Q&A sessions: 8:00–8:45 and 12:30–13:00

Location: Columbia A

Category: Cutting-edge

Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, text simplification, and style transfer. These tasks share a common trait: a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word by word from scratch, which makes them slow at inference time. Text-editing models provide several benefits over seq2seq models, including faster inference, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches, analyzing their pros and cons. We discuss challenges related to deployment and how these models help mitigate hallucination and bias, both pressing challenges in the field of text generation.
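As a purely illustrative sketch of the edit-operation view, the snippet below realizes a target sentence from hand-written per-token KEEP/DELETE tags with optional appended phrases; the tag scheme is a simplification in the spirit of tagging-based editors, and the "predictions" are made up rather than produced by any model covered in the tutorial.

```python
# Purely illustrative: realizing a target sentence from per-token edit tags.
# The KEEP/DELETE scheme with optional appended phrases is a simplification
# of tagging-based text editors; the "predicted" edits below are hand-written.

def apply_edits(source_tokens, edits):
    """Apply (operation, appended_phrase) edits to the source tokens."""
    output = []
    for token, (op, append_phrase) in zip(source_tokens, edits):
        if op == "KEEP":
            output.append(token)
        # For "DELETE", the source token is simply dropped.
        if append_phrase is not None:
            output.append(append_phrase)  # phrase inserted after this position
    return " ".join(output)

source = "she go to school yesterday".split()
# Hypothetical tags for grammatical error correction ("go" -> "went"):
edits = [
    ("KEEP", None),
    ("DELETE", "went"),
    ("KEEP", None),
    ("KEEP", None),
    ("KEEP", None),
]
print(apply_edits(source, edits))  # -> "she went to school yesterday"
```

Because only a few positions change, the model needs to predict one tag per source token instead of decoding the whole output autoregressively, which is the source of the speed and sample-efficiency benefits discussed above.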

T2: Self-supervised Representation Learning for Speech Processing

Hung-yi Lee, Abdelrahman Mohamed, Shinji Watanabe, Tara Sainath, Karen Livescu, Shang-Wen Li, Shu-wen Yang, Katrin Kirchhoff

Speaker Bios

Time: 9:00–12:30

Extra Q&A sessions: 8:00–8:45 and 12:30–13:00

Location: Columbia C

Category: Cutting-edge

Although deep learning models have revolutionized the speech and audio processing field, they have forced researchers to build specialist models for individual tasks and application scenarios, and their appetite for labeled data has bottlenecked progress on dialects and languages with limited labeled resources. Self-supervised representation learning methods promise a single universal model that benefits a collection of tasks and domains. They have recently succeeded in NLP and computer vision, reaching new performance levels while reducing the labels required for many downstream scenarios. Speech representation learning is experiencing similar progress, with three main categories of methods: generative, contrastive, and predictive. Other approaches rely on multimodal data for pre-training, mixing text or visual data streams with speech. Although self-supervised speech representation learning is still a nascent research area, it is closely related to acoustic word embeddings and learning with zero lexical resources. This tutorial session will present self-supervised speech representation learning approaches and their connection to related research areas. Since many current methods focus solely on automatic speech recognition as a downstream task, we will review recent efforts to benchmark learned representations and extend their application beyond speech recognition. A hands-on component of this tutorial will provide practical guidance on building and evaluating speech representation models.
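As a small taste of what "using" a self-supervised speech representation looks like in practice, the sketch below extracts frame-level features from a pretrained wav2vec 2.0 checkpoint via the Hugging Face transformers library; the checkpoint and API are our own illustrative choices, not necessarily the tools used in the tutorial.

```python
# Illustrative sketch (our own choice of tools, not necessarily the tutorial's):
# extracting frame-level representations from a pretrained self-supervised
# speech model with the Hugging Face transformers library.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16000)  # stand-in for 1 second of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Frame-level representations that a downstream task (ASR, speaker ID,
# emotion recognition, ...) can probe or fine-tune.
print(outputs.last_hidden_state.shape)  # (batch, num_frames, hidden_dim)
```

Benchmarking efforts of the kind reviewed in the tutorial typically freeze such representations and train lightweight task-specific heads on top of them.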

T3: New Frontiers of Information Extraction

Muhao Chen, Lifu Huang, Manling Li, Ben Zhou, Heng Ji, Dan Roth

Speaker Bios

Time: 9:00–12:30

Extra Q&A sessions: 8:00–8:45 and 12:30–13:00

Location: Columbia D

Category: Cutting-edge

Information extraction (IE) is the process of automatically extracting structured information from unstructured or semi-structured data. It provides essential support for natural language understanding by recognizing and resolving the concepts, entities, and events described in text, and by inferring the relations among them. In various application domains, IE automates the costly acquisition of the domain-specific knowledge representations that form the backbone of knowledge-driven AI systems. For example, automated knowledge base construction relies on technologies for entity-centric IE, extraction of events and event chains assists machines with narrative prediction and summarization tasks, and medical IE benefits important but expensive clinical tasks such as drug discovery and repurposing.

Despite its importance, frontier research in IE still faces several key challenges. The first is that existing dominant methods based on language-model representations cannot sufficiently capture the knowledge and structure essential to IE tasks. The second is developing extraction models for fine-grained information with less supervision, given that obtaining structural annotations on unlabeled data is very costly. The third is extending the reliability and generalizability of IE systems to real-world scenarios, where data sources often contain incorrect, invalid, or unrecognizable inputs, as well as inputs with unseen labels and mixtures of modalities. By tackling these critical challenges, recent literature is leading to transformative advances in the principles and methodologies of IE system development. We believe it is necessary to present a timely tutorial that comprehensively summarizes the new frontiers of IE research and points out the emerging challenges that deserve further investigation.

In this tutorial, we will systematically review several lines of frontier research on developing robust, reliable, and adaptive learning systems for extracting rich structured information. Beyond introducing robust learning and inference methods for unsupervised denoising, constraint capture, and novelty detection, we will discuss recent approaches for leveraging indirect supervision from natural language inference and generation tasks to improve IE. We will also review recent minimally supervised methods for training IE models with distant supervision from linguistic patterns, corpus statistics, or language-modeling objectives. In addition, we will illustrate how a model trained on a closed domain can be reliably adapted to produce extractions from data sources in different domains, languages, and modalities, or how to acquire global knowledge that guides extraction over a highly diverse open label space. Participants will learn about recent trends and emerging challenges in this topic, representative tools and learning resources for obtaining ready-to-use models, and how related technologies benefit end-user NLP applications.
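To hint at what indirect supervision from NLI can look like in practice, here is a hedged, self-contained sketch; the off-the-shelf model, the example sentence, and the candidate relations are our own illustrative choices, not material from the tutorial.

```python
# Hedged illustration (not from the tutorial materials): using an off-the-shelf
# NLI model as indirect supervision for an IE decision. Each candidate relation
# is phrased as a hypothesis and verified against the sentence; the model name
# and labels below are illustrative choices.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "Marie Curie was awarded the Nobel Prize in Physics in 1903."
result = nli(
    sentence,
    candidate_labels=["received an award", "founded an organization", "was born in 1903"],
    hypothesis_template="Marie Curie {}.",
)
# Labels come back sorted by entailment score; the top one is the extraction.
print(result["labels"][0], result["scores"][0])
```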

T4: Human-Centered Evaluation of Explanations

Jordan Boyd-Graber, Samuel Carton, Shi Feng, Vera Liao, Tania Lombrozo, Alison Smith-Renner, Chenhao Tan

Speaker Bios

Time: 14:00–17:30

Extra Q&A sessions: 13:30–14:00 and 18:00–18:45

Location: Columbia D

Category: Introductory

The NLP community is increasingly interested in providing explanations for NLP models, to help people make sense of model behavior and potentially improve their interaction with models. Beyond the computational challenges of generating these explanations, evaluating them requires human-centered perspectives and approaches. This tutorial will provide an overview of human-centered evaluation of explanations. First, we will give a brief introduction to the psychological foundations of explanations as well as the types of NLP model explanations and how they are presented, to provide the necessary background. We will then present a taxonomy of human-centered evaluation of explanations and dive in depth into two categories: 1) evaluation with human-subject studies and 2) evaluation based on human-annotated explanations. We will conclude by discussing future directions. We will also adopt a flipped format to maximize the interactive components for the live audience.

T5: Multimodal Machine Learning

Louis-Philippe Morency, Paul Pu Liang, Amir Zadeh

Speaker Bios

Time: 14:00–17:30

Extra Q&A sessions: 13:30–14:00 and 18:00–18:45

Location: Columbia C

Category: Cutting-edge

Multimodal machine learning is a vibrant multi-disciplinary research field that addresses some of the original goals of AI by designing computer agents able to demonstrate intelligent capabilities such as understanding, reasoning, and planning through integrating and modeling multiple communicative modalities, including linguistic, acoustic, and visual messages. From the initial research on audio-visual speech recognition to more recent language-and-vision projects such as image and video captioning, visual question answering, and language-guided reinforcement learning, this field poses unique challenges for multimodal researchers, given the heterogeneity of the data and the contingencies often found between modalities.

This tutorial builds upon the annual course on multimodal machine learning taught at Carnegie Mellon University and is a completely revised version of the previous tutorials on multimodal learning at CVPR, ACL, and ICMI conferences. The present tutorial is based on a revamped taxonomy of the core technical challenges present in multimodal machine learning, centered around these six core challenges: representation, alignment, reasoning, induction, generation and quantification. Recent technical achievements will be presented through the lens of this revamped taxonomy of multimodal core challenges, allowing researchers to understand similarities and differences between approaches and new models. The tutorial is also designed to give a perspective on future research directions in multimodal machine learning.

T6: Contrastive Data and Learning for Natural Language Processing

Rui Zhang, Yangfeng Ji, Yue Zhang, Rebecca J. Passonneau

Speaker Bios

Time: 14:00–17:30

Extra Q&A sessions: 13:30–14:00 and 18:00–18:45

Location: Columbia A

Category: Cutting-edge

Current NLP models rely heavily on effective representation learning algorithms. Contrastive learning is one such technique: it learns an embedding space in which similar pairs of data samples have close representations while dissimilar samples stay far apart. It can be used in supervised or unsupervised settings with different loss functions to produce task-specific or general-purpose representations. While it originally enabled success on vision tasks, recent years have seen a growing number of publications on contrastive learning in NLP. This line of work not only delivers promising performance improvements on various NLP tasks, but also provides desirable properties such as task-agnostic sentence representations, faithful text generation, data-efficient learning in zero-shot and few-shot settings, and interpretability and explainability.
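To make the objective concrete, here is a minimal sketch of an InfoNCE-style contrastive loss over sentence embeddings with in-batch negatives; the temperature, embedding dimension, and random "embeddings" are illustrative assumptions, not settings from any paper covered in the tutorial.

```python
# Minimal sketch of an InfoNCE-style contrastive loss with in-batch negatives:
# each anchor's positive is its paired view, all other pairs act as negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors, positives, temperature=0.05):
    """anchors, positives: (batch, dim) embeddings of two views of the same items."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Cosine similarity of every anchor against every positive in the batch.
    logits = anchors @ positives.T / temperature  # (batch, batch)
    # The matching pair sits on the diagonal, so row i's target class is i:
    # similar pairs are pulled together, other in-batch pairs pushed apart.
    targets = torch.arange(anchors.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage with random "sentence embeddings".
anchors = torch.randn(8, 256)
positives = anchors + 0.1 * torch.randn(8, 256)  # slightly perturbed positive views
print(info_nce_loss(anchors, positives).item())
```

Supervised and unsupervised variants mainly differ in how the positive pairs are constructed (e.g., label-sharing examples versus augmented or dropout-perturbed views), while the loss keeps this same pull-together, push-apart structure.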

In this tutorial, we aim to provide a gentle introduction to the fundamentals of contrastive learning approaches and the theory behind them. We then survey the benefits and best practices of contrastive learning for various downstream NLP applications, including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, and Vision-and-Language.