
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
We Need to Consider Disagreement in Evaluation
Basile, Valerio; Fell, Michael; Fornaciari, Tommaso. - : Association for Computational Linguistics, Stroudsburg, PA, USA, 2021
2
Free the Plural: Unrestricted Split-Antecedent Anaphora Resolution ...
3
Phrase Detectives Corpus Version 2
Chamberlain, Jon; Paun, Silviu; Yu, Juntao. - : Linguistic Data Consortium, 2019. URL: https://www.ldc.upenn.edu
4
Phrase Detectives Corpus Version 2 ...
Chamberlain, Jon; Paun, Silviu; Yu, Juntao. - : Linguistic Data Consortium, 2019
5
Crowdsourcing and Aggregating Nested Markable Annotations ...
Madge, Chris; Yu, Juntao; Chamberlain, Jon. - : Universität Regensburg, 2019
6
Crowdsourcing and Aggregating Nested Markable Annotations
Madge, Chris; Yu, Juntao; Chamberlain, Jon. - : Association for Computational Linguistics, 2019
7
A Crowdsourced Corpus of Multiple Judgments and Disagreement on Anaphoric Interpretation
Paun, Silviu; Uma, Alexandra; Poesio, Massimo; Chamberlain, Jon; Kruschwitz, Udo; Yu, Juntao. - : Association for Computational Linguistics, 2019
Abstract: We present a corpus of anaphoric information (coreference) crowdsourced through a game-with-a-purpose. The corpus, containing annotations for about 108,000 markables, is one of the largest corpora for coreference for English, and one of the largest crowdsourced NLP corpora, but its main feature is the large number of judgments per markable: 20 on average, and over 2.2M in total. This characteristic makes the corpus a unique resource for the study of disagreements on anaphoric interpretation. A second distinctive feature is its rich annotation scheme, covering singletons, expletives, and split-antecedent plurals. Finally, the corpus also comes with labels inferred using a recently proposed probabilistic model of annotation for coreference. The labels are of high quality and make it possible to successfully train a state of the art coreference resolver, including training on singletons and non-referring expressions. The annotation model can also result in more than one label, or no label, being proposed for a markable, thus serving as a baseline method for automatically identifying ambiguous markables. A preliminary analysis of the results is presented.
URL: http://repository.essex.ac.uk/25795/
https://doi.org/10.18653/v1/N19-1176
http://repository.essex.ac.uk/25795/7/N19-1176.pdf
8
Crowdsourcing and Aggregating Nested Markable Annotations
Poesio, Massimo; Yu, Juntao; Chamberlain, Jon. - : Association for Computational Linguistics, 2019
9
A Crowdsourced Corpus of Multiple Judgments and Disagreement on Anaphoric Interpretation
Poesio, Massimo; Chamberlain, Jon; Paun, Silviu. - : Association for Computational Linguistics, 2019
10
Comparing Bayesian Models of Annotation
Paun, Silviu; Carpenter, Bob; Hovy, Dirk. - : Association for Computational Linguistics, 2018
11
A Probabilistic Annotation Model for Crowdsourcing Coreference
Kruschwitz, Udo; Chamberlain, Jon; Yu, Juntao. - : Association for Computational Linguistics, 2018

Result sources: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents (BASE) 11