1. Crowdsourcing and curation: perspectives from biology and natural language processing. (BASE)
2. Overview of the interactive task in BioCreative V
   In: Database: The Journal of Biological Databases and Curation, Oxford University Press, 2016. ISSN/EISSN: 1758-0463. DOI: 10.1093/database/baw119. URL: https://academic.oup.com/database/article-lookup/doi/10.1093/database/baw119
4. Distribution of ambiguous synonyms in Fly, Mouse and Yeast task 1B lexical resources ...
5. Sample abstract with unique gene identifiers, plus excerpt from lexicon ...
8. Principles of Evaluation in Natural Language Processing
   In: Revue TAL, ATALA (Association pour le Traitement Automatique des Langues), 2007, 48(1), pp. 7-31. ISSN: 1248-9433; EISSN: 1965-0906.
10. The MiTAP System for Monitoring Reports of Disease Outbreak
    In: DTIC (2006)
12. Overview of BioCreAtIvE: critical assessment of information extraction for biology
14. Critical Assessment of Information Extraction Systems in Biology
19. Integrated Feasibility Experiment for Bio-Security: IFE-Bio, A TIDES Demonstration
    In: DTIC (2001)
20. Automating Coreference: The Role of Annotated Training Data ...

    Abstract: We report on a study of interannotator agreement in the coreference task as defined by the Message Understanding Conferences (MUC-6 and MUC-7). Based on feedback from annotators, we clarified and simplified the annotation specification. We then analyzed disagreements among several annotators and concluded that only 16% of them represented genuine disagreement about coreference; the rest were mostly typographical errors or omissions, easily reconciled. Initially, we measured interannotator agreement in the low 80s for precision and recall. To improve on this, we ran several experiments; in the final one, we separated the tagging of candidate noun phrases from the linking of actual coreferring expressions. This method shows promise (interannotator agreement climbed to the low 90s) but needs more extensive validation. These results position the research community to broaden the coreference task to multiple languages, and possibly to different ...

    Note: 4 pages, 5 figures. To appear in the AAAI Spring Symposium on Applying Machine Learning to Discourse Processing. The Alembic Workbench annotation tool described in this paper is available at http://www.mitre.org/resources/centers/advanced_info/g04h/workbench.html

    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

    URL: https://arxiv.org/abs/cmp-lg/9803001 ; DOI: 10.48550/arxiv.cmp-lg/9803001
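The abstract of entry 20 reports interannotator agreement as precision and recall of one annotator's coreference annotation scored against another's. A minimal sketch of that arithmetic, treating each annotation as a set of coreference links between mention ids; the link sets and mention ids here are hypothetical, and the actual MUC scorer uses a more elaborate model-theoretic link alignment than this simple set intersection:

```python
def agreement(links_a, links_b):
    """Precision and recall of annotator A's links scored against annotator B's."""
    common = links_a & links_b
    precision = len(common) / len(links_a)  # shared links / all of A's links
    recall = len(common) / len(links_b)     # shared links / all of B's links
    return precision, recall

# Hypothetical annotations: links are unordered pairs of mention ids
# judged coreferent by each annotator.
a = {frozenset(p) for p in [("m1", "m2"), ("m2", "m3"), ("m4", "m5")]}
b = {frozenset(p) for p in [("m1", "m2"), ("m4", "m5"), ("m6", "m7")]}

p, r = agreement(a, b)
print(round(p, 2), round(r, 2))  # 2 of 3 links shared each way: 0.67 0.67
```

With symmetric link counts, as here, precision and recall coincide; in general they differ, which is why the abstract reports both figures.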