Main Conference Review Process
NAACL 2022 invited the submission of long and short papers on all aspects of Computational Linguistics and Natural Language Processing (NLP). Our paper review process was organized in a hierarchical structure similar to that of recent years. We recruited 62 senior area chairs (SACs) for 26 areas, following the areas defined for NAACL 2021. There were two paths for submitting papers: special theme papers were submitted directly to the NAACL OpenReview site, while all other main conference papers were reviewed through a new ACL-wide centralized reviewing process.
Special Theme Submissions
We highlighted “Human-Centered Natural Language Processing” as the special theme for the conference. As NLP applications increasingly mediate people’s lives, it is crucial to understand how the design decisions made throughout the NLP research and development lifecycle impact people, whether they are users, developers, data providers, or other stakeholders. For NAACL 2022, we invited submissions addressing research questions that meaningfully incorporate stakeholders in the design, development, and evaluation of NLP resources, models, and systems. We particularly encouraged submissions that bring together perspectives and methods from NLP and Human-Computer Interaction. Given their interdisciplinary nature, theme papers were reviewed through a dedicated process by reviewers with expertise in NLP and in Human-Computer Interaction. We received 52 submissions to the special theme, of which 14 were accepted to appear at the conference.
ACL Rolling Review Submissions
In coordination with the ACL 2022 organizers, we experimented with ACL Rolling Review (ARR), introduced as part of an initiative to improve the efficiency and turnaround of reviewing for ACL conferences. Under this system, reviewing and acceptance of papers to publication venues follows a two-step process: (1) centralized rolling review via ARR, where submissions receive reviews and meta-reviews from ARR reviewers and action editors; and (2) commitment to a publication venue (e.g., NAACL 2022), where senior area chairs and program chairs make acceptance decisions for each submission based on its ARR reviews and meta-reviews. During the first phase of the review process, we served as guest editors-in-chief for ACL Rolling Review and worked to ensure that every submitted paper received at least three reviews and one meta-review, while balancing the reviewing load across reviewers and action editors. NAACL SACs acted as guest senior area chairs in the ARR system, helping monitor review progress and supporting the 408 action editors and 3379 reviewers in their work. While the new reviewing mechanism was not as smooth as one could have hoped for, all papers submitted to ARR received at least three reviews and a meta-review, so that authors could decide whether to commit their paper to NAACL 2022.
Once papers were committed to the NAACL OpenReview site, SACs were in charge of making acceptance recommendations for each area, taking into account the submission itself, its (meta-)reviews, the comments to SACs provided by the authors, and ethics reviews when applicable.
Ethics Reviews
In coordination with Jesse Dodge, Anna Rogers, Margot Mieskes, Amanda Stent, and the ACL Ethics Committee, we incorporated a “Responsible NLP Research” checklist into the submission process, designed to encourage best research practices in our field from an ethics and reproducibility perspective. Authors were asked to follow the ACL code of ethics and to fill out the checklist to ensure that best practices were put in place. Reviewers were asked to consult the checklist when deciding whether a paper required an ethics review. Based on input from reviewers and action editors, SACs flagged papers that required an in-depth ethics review, which was handled by a committee of 11 ethics reviewers. The ethics chairs provided guidance and office hours to help SACs decide when an ethics review was required. The ethics reviews were integrated into the final acceptance recommendations by SACs and decisions by PCs.
Submission Statistics
The ACL Rolling Review received 196 submissions in December and 1897 in January, the two submission cycles falling between the ACL and NAACL commitment deadlines. Of these 2103 submissions, 56% (1073) were committed to NAACL 2022 for the senior program committee to make an acceptance decision. We accepted a total of 442 papers (358 long papers and 84 short papers), representing 21.96% of the papers submitted to ARR in December and January and to the NAACL special theme, and 41.19% of the papers committed to NAACL (including the special theme papers). As a reference point, NAACL-HLT 2021 received 1797 submissions and accepted 477 papers (350 long and 127 short), for an overall acceptance rate of 26%.
Additionally, 209 submissions (183 long and 26 short) were accepted for publication in the “Findings of ACL: NAACL 2022” (Findings for short), an online companion publication for papers assessed by the program committee as solid work with sufficient substance. A total of 5 accepted Findings papers were withdrawn. Authors of Findings papers were given the option to present their work as a poster during the main conference: 183 took this opportunity and will present either in person or virtually.
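As a sanity check on the figures above, here is a minimal Python sketch (purely illustrative, not part of any conference tooling) that reproduces the acceptance-rate arithmetic from the counts reported in this section:

```python
# Recompute the headline acceptance rates from the counts reported above.
accepted_long, accepted_short = 358, 84
accepted = accepted_long + accepted_short   # 442
committed = 1073                            # papers committed to NAACL 2022, incl. special theme

# Acceptance rate over committed papers:
print(f"{accepted / committed:.2%}")        # -> 41.19%

# NAACL-HLT 2021 reference point (477 accepted out of 1797 submissions):
print(f"{477 / 1797:.2%}")                  # -> 26.54%, quoted as 26% above
```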
For reference, here is the full table of per-track acceptance statistics:
Track | Committed (Long) | Committed (Short) | Committed (Total) | Accepted (Long) | Accepted (Short) | Accepted (Total)
---|---|---|---|---|---|---
Computational Social Science and Cultural Analytics | 24 | 5 | 29 | 5 | 3 | 8 |
Dialogue and Interactive Systems | 74 | 19 | 93 | 30 | 7 | 37 |
Discourse and Pragmatics | 8 | 4 | 12 | 2 | 1 | 3 |
Efficient Methods in NLP | 31 | 14 | 45 | 10 | 5 | 15 |
Ethics, Bias, and Fairness | 24 | 8 | 32 | 14 | 4 | 18 |
Information Extraction | 75 | 22 | 97 | 30 | 7 | 37 |
Information Retrieval and Text Mining | 22 | 7 | 29 | 7 | 5 | 12 |
Interpretability and Analysis of Models for NLP | 47 | 13 | 60 | 25 | 5 | 30 |
Language Generation | 38 | 8 | 46 | 19 | 1 | 20 |
Language Grounding to Vision, Robotics and Beyond | 28 | 10 | 38 | 13 | 4 | 17 |
Language Resources and Evaluation | 54 | 9 | 63 | 23 | 3 | 26 |
Linguistic Theories, Cognitive Modeling and Psycholinguistics | 6 | 5 | 11 | 5 | 1 | 6 |
Machine Learning for NLP: Classification and Structured Prediction Models | 31 | 14 | 45 | 15 | 7 | 22 |
Machine Learning for NLP: Language Modeling and Sequence to Sequence Models | 32 | 6 | 38 | 14 | 3 | 17 |
Machine Translation | 36 | 11 | 47 | 19 | 4 | 23 |
Multilinguality | 18 | 7 | 25 | 6 | 3 | 9 |
NLP Applications | 71 | 14 | 85 | 30 | 2 | 32 |
Phonology, Morphology and Word Segmentation | 3 | 2 | 5 | 2 | 1 | 3 |
Question Answering | 56 | 7 | 63 | 21 | 2 | 23 |
Semantics: Lexical Semantics | 7 | 3 | 10 | 2 | 0 | 2 |
Semantics: Sentence-level Semantics and Textual Inference | 40 | 8 | 48 | 16 | 7 | 23 |
Sentiment Analysis and Stylistic Analysis | 25 | 8 | 33 | 11 | 4 | 15 |
Special Theme | 50 | 0 | 50 | 14 | 0 | 14 |
Speech | 6 | 2 | 8 | 3 | 1 | 4 |
Summarization | 37 | 9 | 46 | 17 | 3 | 20 |
Syntax: Tagging, Chunking, and Parsing | 12 | 3 | 15 | 5 | 1 | 6 |
Total | 855 | 218 | 1073 | 358 | 84 | 442 |
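Per-track acceptance rates follow directly from the table. As a hedged illustration, the snippet below recomputes totals and rates for a few sample rows (the `tracks` mapping is a hypothetical helper populated from the table above; the remaining rows are elided for brevity):

```python
# Recompute per-track acceptance rates from the table above.
# Values: (committed long, committed short, accepted long, accepted short).
tracks = {
    "Dialogue and Interactive Systems": (74, 19, 30, 7),
    "Information Extraction": (75, 22, 30, 7),
    "Special Theme": (50, 0, 14, 0),
    # ... remaining tracks elided for brevity
}

for name, (c_long, c_short, a_long, a_short) in tracks.items():
    committed = c_long + c_short
    accepted = a_long + a_short
    print(f"{name}: {accepted}/{committed} = {accepted / committed:.1%}")
```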
Thanks
In closing, we would like to thank the thousands of people who contributed to this process, starting with all members of the Program Committee:
- The senior area chairs, who were incredibly responsive throughout the reviewing process and patiently helped improve the new reviewing infrastructure.
- The ARR action editors and reviewers. Special thanks to those who stepped in at the last minute to serve as emergency reviewers. This was tremendously appreciated!
- The special theme area chair, Jeff Bigham, and all reviewers, with a special note of appreciation for those who contributed their time and expertise even though they do not usually publish in NLP conferences.
- The ethics chairs, Kai-Wei Chang, Dirk Hovy, and Diyi Yang, for designing a process that encourages consistent evaluation of ethical considerations during the review process, and for their timely input to ensure that ethics reviews were integrated into acceptance recommendations and decisions.
- The ethics reviewers: Yonatan Bisk, Kevin Bretonnel Cohen, Francien Dechesne, Jack Hessel, Jin-Dong Kim, Anne Lauscher, Dave Lewis, Margot Mieskes, Xanda Schofield, Lyle Ungar, Jingbo Xia.
- The outstanding reviewers and action editors who were nominated by the senior area chairs for writing reviews that were particularly helpful in the decision-making process. They are recognized by name in the Proceedings of the conference.
Experimenting with a new reviewing system on the large scale required by NAACL would not have been possible without the following people:
- Amanda Stent and Goran Glavaš, as ARR Editors-in-Chief, for their tireless work in support of the ARR December and January cycles.
- Graham Neubig, Dhruv Naik, and Nils Dycke, as the ARR Tech Team for these two cycles.
- Celeste Martinez Gomez, Melisa Bok, and Nadia L’Bahy, as the OpenReview Tech Team.
- Elijah Rippeth for his help coordinating the special theme submissions.