Nov 10, 2021 | Our workshop has concluded successfully. We would like to thank all authors, reviewers, steering committee members, keynote speakers, sponsors, and participants for making this workshop fantastic. We hope to see you all again at the 3rd Eval4NLP workshop. |
---|---|
Nov 10, 2021 | The list of best paper awards has been announced. Congratulations! |
Nov 08, 2021 | The list of keynote speakers and their talk abstracts has been added to our Program. |
Oct 28, 2021 | The program of the workshop has been published. |
Oct 10, 2021 | The list of accepted papers has been published. |
Aug 31, 2021 | As the CodaLab competition system is unstable, the organizers have decided to extend the submission deadline of the shared task to September 3, 2021. |
Aug 20, 2021 | The test phase of our shared task begins now! The test data is here. Please don't forget to join our Google Group for the latest updates.
Jul 25, 2021 | The submission deadline of research papers has been extended to July 31, 2021. |
Jul 20, 2021 | The Multiple Submission Policy and Presenting Published Papers sections in our call for papers have been updated. |
Jun 30, 2021 | The CodaLab competition of our shared task is now live! |
Jun 16, 2021 | Please join this Google Group for posting questions related to our shared task. |
Jun 10, 2021 | The baseline and the annotation guidelines of our shared task are now available. |
May 24, 2021 | We announce the shared task on "Explainable Quality Estimation". |
May 22, 2021 | The submission system is now open! More details about preprints and supplementary materials have been added to the Call for Papers.
May 14, 2021 | We also welcome submissions from ACL Rolling Review. |
Apr 22, 2021 | The Call for Papers is out! |
Apr 17, 2021 | The Artificial Intelligence Journal (AIJ) and Salesforce are our generous sponsors this year. |
Nov 19, 2020 | The workshop website has been launched.
Fair evaluation and comparison are of fundamental importance to the NLP community for properly tracking progress, especially amid the current deep learning revolution, in which new state-of-the-art results are reported at ever shorter intervals. This concerns the creation of benchmark datasets that cover typical use cases and blind spots of existing systems, the design of metrics for evaluating the performance of NLP systems along different dimensions, and the unbiased reporting of evaluation results.
Although certain aspects of NLP evaluation and comparison have been addressed in previous workshops (e.g., the Metrics Tasks at WMT, NeuralGen, NLG-Evaluation, and New Frontiers in Summarization), we believe that new insights and methodology, particularly from the last one to two years, have led to renewed interest in the workshop topic. The first workshop in the series, Eval4NLP'20 (co-located with EMNLP'20), was the first workshop to take a broad and unifying perspective on the subject. We believe the second workshop will continue this tradition and become an established platform for presenting and discussing the latest advances in NLP evaluation methods and resources.
Particular topics of interest for the workshop include (but are not limited to):
See reference papers here.
Email: eval4nlp@gmail.com