The following speakers have graciously agreed to give invited talks at NAACL 2022.
Title: Shaping Technology with Moral Imagination: Leveraging the Machinery of Value Sensitive Design
Time: Monday, July 11, 9:15 – 10:15
Abstract: Tools and technologies are fundamental to the human condition. They do no less than create and structure the conditions in which we live, express ourselves, enact society, and experience what it means to be human. They are also the result of our moral and technical imaginations. Yet, with our limited view, it is not at all obvious how to design and engineer tools and technology so that they are more likely to support the actions, relationships, institutions, and experiences that human beings care deeply about – a life and society of human flourishing.
Value Sensitive Design (VSD) was developed as an approach to address this challenge from within technical design processes. Drawing on over three decades of work, in this plenary talk I will provide an introduction to value sensitive design, which foregrounds human values in the technical design process. My remarks will present some of value sensitive design’s core theoretical constructs. Along the way, I’ll provide some examples of applying value sensitive design to robots for healthcare and to bias in computing systems, as well as demonstrate one toolkit—the Envisioning Cards—in the context of a design activity.
As time permits, I will turn to a discussion of structure, scale and time: we act within existing structure in the now, from which futures unfold across time and scale. I will unpack these observations and their implications for artificial intelligence and machine learning technologies. Thinking longer-term and systemically, I will bring forward a range of potential challenges and offer some constructive ways forward. My comments will engage individual lives, society writ large, what it means to be human, the planet and beyond.
Please have scratch paper and a pencil handy for the design activity.
Speaker Bio: Batya Friedman is a Professor in the Information School and holds adjunct appointments in the Paul G. Allen School of Computer Science & Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington where she co-founded the Value Sensitive Design Lab and the UW Tech Policy Lab. Dr. Friedman pioneered value sensitive design (VSD), an established approach to account for human values in the design of technical systems. Her work in value sensitive design has resulted in robust theoretical constructs, dozens of innovative methods, and practical toolkits such as the Envisioning Cards. Value sensitive design has been widely adopted nationally and internationally where it has been used in architecture, biomedical health informatics, civil engineering, computer security, energy, global health, human-computer interaction, human-robotic interaction, information management, legal theory, moral philosophy, tech policy, transportation, and urban planning, among others. Additionally, value sensitive design is emerging in higher education, government, and industry as a key approach to address computing ethics and responsible innovation. Today, Dr. Friedman is working on open questions in value sensitive design including multi-lifespan design, and designing for and with non-human stakeholders – questions critical for the wellbeing of human societies and the planet.
Dr. Friedman’s 2019 MIT Press book co-authored with David Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination, provides a comprehensive account of value sensitive design. In 2012 Dr. Friedman received the ACM-SIGCHI Social Impact Award and the University Faculty Lecturer award at the University of Washington, in 2019 she was inducted into the CHI Academy, in 2020 she received an honorary doctorate from Delft University of Technology, and in 2021 she was recognized as an ACM Fellow. She is also a stone sculptor and mixed media artist. Dr. Friedman received both her B.A. and Ph.D. from the University of California at Berkeley.
Title: NLP in Mexican Spanish: One of many stories
Time: Wednesday, July 13, 16:15 – 17:15
Abstract: Spanish is one of the most widely spoken languages in the world; however, the development of language technologies for it has not kept pace. This is particularly true for some of its Latin American variants, such as Mexican Spanish. This talk will focus on the development of NLP for Mexican Spanish, emphasizing one of its many research stories related to the analysis of social media content.
This talk will present some data on the languages spoken in Mexico and on the development of the field of Natural Language Processing in our country, and will describe a research project that combined the efforts of several groups: the identification of abusive language in Mexican tweets. The talk will conclude by presenting some calls for collaboration, with the intention of increasing and improving research in Mexican Spanish as well as in the many indigenous languages spoken in Mexico.
Speaker Bio: Manuel Montes-y-Gómez is a Full Professor at the National Institute of Astrophysics, Optics and Electronics (INAOE) of Mexico. His research is on automatic text processing. He is the author of more than 250 journal and conference papers in the fields of information retrieval, text mining, and authorship analysis.
He has been a visiting professor at the Polytechnic University of Valencia (Spain) and the University of Alabama (USA). He is also a member of the Mexican Academy of Sciences (AMC), and a founding member of the Mexican Academy of Computer Science (AMEXCOMP), the Mexican Association of Natural Language Processing (AMNLP), and the Language Technology Network of CONACYT. Within these organizations, he has organized the National Workshop on Language Technologies (2004–2016), the Mexican Workshop on Plagiarism Detection and Authorship Analysis (2016–2020), the Mexican Autumn School on Language Technologies (2015 and 2016), and a shared task on author profiling, aggressiveness analysis, and fake news detection in Mexican Spanish at IberLEF (2018–2021).
Panel: The Place of Linguistics and Symbolic Structures
Time: Tuesday, July 12, 9:15 – 10:15
The widespread adoption of neural models in NLP research and the fact that NLP applications increasingly mediate people’s lives have prompted many discussions about what productive research directions might look like for our community. Since NAACL is a meeting of a chapter of the Association for Computational Linguistics, we would like to highlight specifically the role that linguistics and symbolic structures can play (or not) in shaping these research directions.
Moderator: Dan Roth, University of Pennsylvania & AWS AI Labs
Dan Roth is the Eduardo D. Glandt Distinguished Professor at the Department of CIS, UPenn, the NLP Lead at AWS AI, and a Fellow of the AAAS, ACM, AAAI, and ACL. In 2017 Roth received the John McCarthy Award. Roth has published broadly in ML, NLP, KRR, and learning theory, and has given keynote talks and tutorials in all ACL and AAAI major conferences. Roth was the Editor-in-Chief of JAIR until 2017, and the program chair of AAAI’11, ACL’03 and CoNLL’02.
Panelist: Emily M. Bender, University of Washington
Emily M. Bender is a Professor of Linguistics at the University of Washington and the Faculty Director of UW’s Professional Master’s in Computational Linguistics. Her research interests include computational semantics, multilingual grammar engineering, the interplay between linguistics and NLP, and the societal impacts of language technology. She is the author of two books that present linguistic concepts in a manner accessible to NLP practitioners: Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax (2013) and Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics (2019; with Alex Lascarides), as well as the co-author of recent influential papers such as Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (ACL 2020) and On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (FAccT 2021).
Panelist: Dilek Hakkani-Tür, Amazon Alexa AI
Dilek Hakkani-Tür is a senior principal scientist at Amazon Alexa AI, focusing on enabling natural dialogues with machines. Prior to joining Amazon, she was a researcher at Google, Microsoft Research, the International Computer Science Institute at UC Berkeley, and AT&T Labs-Research. Her research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing. She received best paper awards from the IEEE Signal Processing Society, ISCA, and EURASIP for publications she co-authored on conversational systems. Recently, she served as a program chair for NAACL 2020 and as editor-in-chief of the IEEE Transactions on Audio, Speech, and Language Processing, and was an IEEE Distinguished Industry Speaker. She is a fellow of ISCA and IEEE.
Panelist: Chitta Baral, Arizona State University
Chitta Baral is a Professor in the School of Computing and AI at Arizona State University. His research interests include Knowledge Representation and Reasoning (KR & R), Natural Language Understanding (NLU), Image/Video Understanding, and their applications to Molecular Biology, Health Informatics, and Robotics. Chitta is the author of the book “Knowledge Representation, Reasoning and Declarative Problem Solving” and a past President of KR Inc. His current research focus is on leveraging decades of research in KR & R for better understanding of natural language and images/videos. Toward that end, he has worked on a framework for translating natural language to formal representations (NL2KR); abducing missing knowledge and knowledge hunting; exploring NLU challenges where reasoning with knowledge, reasoning about actions, and commonsense reasoning are crucial; exploring the use of natural language as a knowledge representation and instructional formalism; and exploring the role of reasoning and knowledge in enhancing generalizability, robustness, and few-shot learning.
Panelist: Christopher D. Manning, Stanford University
Christopher Manning is a professor of linguistics and computer science at Stanford University, Director of the Stanford Artificial Intelligence Lab (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is a leader in applying deep neural networks to natural language processing (NLP), including work on neural machine translation, tree-recursive models, natural language inference, summarization, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), teaches and has co-written textbooks for NLP (CS 224N) and information retrieval (CS 276), co-developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP and Stanza software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.
Panel: Careers in NLP Panel
Time: Monday, July 11, 13:15 – 14:15
The Careers in NLP Panel is a standing feature of the NAACL Industry Track. The panel is addressed to graduate students and junior researchers as well as their supervisors and mentors, although all NAACL participants are welcome. The panelists will discuss the diversity of career paths in NLP, from more research-oriented NLP scientist roles to careers in product.
Moderator: Yunyao Li, Apple Knowledge Platform
Yunyao Li is the Head of Machine Learning, Apple Knowledge Platform, where her team builds next-generation machine learning solutions to help power features such as Siri and Spotlight. Previously, she was a Distinguished Research Staff Member and Senior Research Manager at IBM Research - Almaden. She is particularly known for her work in scalable NLP, enterprise search, and database usability. She has built systems, developed solutions, and delivered core technologies to over 20 IBM products under brands such as Watson, InfoSphere, and Cognos. She has published over 80 articles, earning multiple awards, and a book. She was an IBM Master Inventor, with over 50 patents filed or granted. She is an ACM Distinguished Member. She was a member of the inaugural New Voices program of the US National Academies (1 of 18 selected nationwide) and represented US young scientists at the World Laureates Forum Young Scientists Forum in 2019 (1 of 4 selected nationwide).
Panelist: Yang Liu, Amazon, Alexa AI
Yang Liu is currently a principal scientist at Amazon, Alexa AI. Her research interest is in speech and language processing. She received her BS and MS from Tsinghua University, and her Ph.D. from Purdue University. Before joining Amazon, she was the head of the LAIX Silicon Valley AI lab, a research scientist at Facebook, a visiting scientist at Google, a faculty member at the University of Texas at Dallas, and a researcher at ICSI in Berkeley. She received the NSF CAREER award and the Air Force Young Investigator Program award. She is currently a member of the IEEE SLTC committee, a senior area editor for IEEE/ACM Transactions on Audio, Speech and Language Processing, and an action editor for TACL. She was one of the program chairs for EMNLP 2020, and has served regularly as an area chair and reviewer at past NLP conferences. She is a fellow of IEEE and ISCA.
Panelist: Timo Mertens, Grammarly
Timo Mertens is the Head of Machine Learning & NLP Products at Grammarly. In his role, he oversees the teams that design and build products that use machine learning and natural language processing. These technologies empower Grammarly to offer a digital writing assistant that helps millions of users write more clearly and effectively every day. Timo has focused on the intersection between machine learning and delivering impactful products throughout his career, spanning academia—with a Ph.D. in Speech Recognition—and industry, where he’s held product leadership positions across Microsoft, Google, and Dropbox.
Panelist: Thamar Solorio, University of Houston and Bloomberg LP
Thamar Solorio is a Professor of Computer Science at the University of Houston (UH) and a visiting scientist at Bloomberg LP. She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica, Óptica y Electrónica in Puebla, Mexico. Her research interests include information extraction from social media data, enabling technology for code-switched data, stylistic modelling of text, and, more recently, multimodal approaches for online content understanding. She is the director and founder of the Research in Text Understanding and Language Analysis Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution, and of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is currently serving a second term as an elected board member of the North American Chapter of the Association for Computational Linguistics.
Panelist: Luke Zettlemoyer, University of Washington and Meta
Luke Zettlemoyer is a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and a Research Scientist at Meta. His research focuses on empirical methods for natural language semantics, and involves designing machine learning algorithms, introducing new tasks and datasets, and, most recently, studying how to best develop self-supervision signals for pre-training. His honors include being named an ACL Fellow as well as winning a PECASE award, an Allen Distinguished Investigator award, and multiple best paper awards.