As NLP applications increasingly mediate people’s lives, it is crucial to understand how the design decisions made throughout the NLP research and development lifecycle impact people, whether they are users, developers, data providers, or other stakeholders. For NAACL 2022, we invite submissions that address research questions that meaningfully incorporate stakeholders in the design, development, and evaluation of NLP resources, models and systems. We particularly encourage submissions that bring together perspectives and methods from NLP and Human-Computer Interaction. In addition to papers presenting research studies, we invite survey and position papers that take stock of past work in human-centered NLP and propose directions for framing future research.
Topics of interest include (but are not limited to): usability studies of language technologies; needs-finding studies; studies of human factors in the NLP R&D lifecycle, including interactive systems; human-centered fairness, accountability, explainability, transparency, and ethics in NLP systems; and human-centered evaluations of NLP technologies.
Relevant methods include (but are not limited to) user-centered design, value-sensitive design, participatory design, assets-based design, and qualitative methods, such as grounded theory. We welcome contributions that use such methods to study NLP problems, as well as methodological innovations and tools that tailor these methods to NLP.
What is the difference between a theme paper and a regular paper?
Fundamentally, the theme track focuses on papers that center people in the research questions asked and the methods used to address them. Submissions are encouraged to explicitly make the case for how the paper addresses the theme. We give a few examples below to help authors select the most appropriate track for their submission:
- A paper on semantic parsing might be motivated by the desire to provide tools that support user information needs, but this alone does not align the paper with the special theme. If the submission contributes a new modeling or training technique that is evaluated on standard corpus-based benchmarks, it would best fit the regular track. However, if the submission evaluates, e.g., how users calibrate their trust in the predictions of the system, it would be a good fit for the theme track.
- A regular track paper could contribute a machine translation benchmark involving under-studied languages, and be motivated by the need to improve communication across language barriers for speakers of these languages. It would be in scope for the special theme if, e.g., it takes a participatory design approach to collecting data sources and evaluation strategies.
- A paper contributing a technique for explaining the predictions of an NLP system would be in scope if it directly studies how explanations impact how users perceive and interact with the system.
As examples, here are some papers published in the ACL Anthology or at other venues that align with the special theme:
- Keep It Simple: Unsupervised Simplification of Multi-Paragraph Text. Philippe Laban, Tobias Schnabel, Paul Bennett, Marti A. Hearst. ACL-IJCNLP 2021.
- STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, Mohit Iyyer. EMNLP 2020.
- Automatic Text Simplification Tools for Deaf and Hard of Hearing Adults: Benefits of Lexical Simplification and Providing Users with Autonomy. Oliver Alonzo, Matthew Seita, Abraham Glasser, Matt Huenerfauth. CHI 2020.
- Unmet Needs and Opportunities for Mobile Translation AI. Daniel J. Liebling, Michal Lahav, Abigail Evans, Aaron Donsbach, Jess Holbrook, Boris Smus, Lindsey Boran. CHI 2020.
- User-centered & Robust Open-source Software: Lessons Learned from Developing & Maintaining RSMTool. Nitin Madnani, Anastassia Loukina. NLP-OSS @ ACL 2018.
- Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages. Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, Abdallah Bashir. Findings of EMNLP 2020.
- Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies. Weiyan Shi, Xuewei Wang, Yoo Jung Oh, Jingwen Zhang, Saurav Sahay, Zhou Yu. CHI 2020.
- If I Hear You Correctly: Building and Evaluating Interview Chatbots with Active Listening Skills. Ziang Xiao, Michelle X. Zhou, Wenxi Chen, Huahai Yang, Changyan Chi. CHI 2020.
- Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories. Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, Noah A. Smith. Intelligent User Interfaces, 2018.
- Closing the Loop: User-Centered Design and Evaluation of a Human-in-the-Loop Topic Modeling System. Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, Leah Findlater. Intelligent User Interfaces, 2018.
- Sensing and Learning Human Annotators Engaged in Narrative Sensemaking. McKenna Tornblad, Luke Lapresi, Christopher Homan, Raymond Ptucha, Cecilia Ovesdotter Alm. NAACL SRW 2018.
- A Spellchecker for Dyslexia. Luz Rello, Miguel Ballesteros, Jeffrey P. Bigham. ASSETS 2015.
- Personal storytelling: Using Natural Language Generation for children with complex communication needs, in the wild… Nava Tintarev, Ehud Reiter, Rolf Black, Annalu Waller, Joe Reddington. International Journal of Human-Computer Studies, 2015.
What is the review process?
Special theme papers will be reviewed through a dedicated process. Submissions will be accepted on the NAACL OpenReview site until the January 15 Anywhere on Earth (AoE) deadline. Reviewing will be handled by area chairs and reviewers with expertise spanning NLP and HCI. The review form for theme submissions will also be separate from the one used to evaluate regular papers.
What are the presentation forms?
Accepted papers will be presented either orally or as a poster. We anticipate having special sessions dedicated to the theme.
Will there be other events related to this theme at the conference?
We are working on other theme-related events at NAACL 2022, which may include invited talks and tutorials. Tutorial proposals should be submitted to the Joint Call for Tutorials for 2022 conferences. You are welcome to reach out to the PCs with ideas and suggestions (firstname.lastname@example.org).