
Search in the Catalogues and Directories

Hits 1 – 3 of 3

1
MOLEMAN: Mention-Only Linking of Entities with a Mention Annotation Network ...
BASE
2
MOLEMAN: Mention-Only Linking of Entities with a Mention Annotation Network ...
BASE
3
Neural Models for Large-Scale Semantic Role Labelling
Abstract: Thesis (Ph.D.)--University of Washington, 2018 ; Recovering predicate-argument structures from natural language sentences is an important task in natural language processing (NLP), where the goal is to identify "who did what to whom" with respect to events described in a sentence. A key challenge in this task is the sparsity of labeled data: a given predicate-role instance may occur only a handful of times in the training set. While attempts have been made to collect large, diverse datasets that could help mitigate this sparsity, these efforts are hampered by the difficulty inherent in labeling traditional SRL formalisms such as PropBank and FrameNet. We take a two-pronged approach to solving these issues. First, we develop models that can jointly represent multiple SRL annotation schemes, allowing us to pool annotations across multiple datasets. We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset. Next, we demonstrate that crowdsourcing techniques can be used to collect a large, high-quality SRL dataset at much lower cost than previous methods, and that this data can be used to learn a high-quality SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains and was gathered with a new crowdsourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship. Finally, we combine these two approaches, investigating whether QA-SRL annotations can be used to improve performance on PropBank in a multitask learning setup. We find that using the QA-SRL data improves performance in regimes with small amounts of in-domain PropBank data, but that these improvements are overshadowed by those obtained by using deep contextual word representations trained on large amounts of unlabeled text, raising important questions for future work as to the utility of multitask training relative to these unsupervised approaches.
Keyword: Computer science; Computer science and engineering; Deep Learning; Frame Semantic Parsing; Linguistics; Natural Language Processing; Parsing; Semantic Role Labelling; Semantics
URL: http://hdl.handle.net/1773/43017
BASE

© 2013 - 2024 Lin|gu|is|tik