
Search in the Catalogues and Directories

Hits 1 – 20 of 113

1
GreaseLM: Graph REASoning Enhanced Language Models for Question Answering ...
2
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
3
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
4
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
5
Human-like informative conversations: Better acknowledgements using conditional mutual information ...
6
Universal Dependencies ...
7
ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts ...
8
ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts ...
9
Conditional probing: measuring usable information beyond a baseline ...
10
Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering ...
11
Human-like informative conversations: Better acknowledgements using conditional mutual information ...
NAACL 2021; Manning, Christopher; Paranjape, Ashwin. - : Underline Science Inc., 2021
Abstract: Read the paper at the following link: https://www.aclweb.org/anthology/2021.naacl-main.61/
This work aims to build a dialogue agent that can weave new factual content into conversations as naturally as humans. We draw insights from linguistic principles of conversational analysis and annotate human-human conversations from the Switchboard Dialog Act Corpus to examine humans' strategies for acknowledgement, transition, detail selection and presentation. When current chatbots (explicitly provided with new factual content) introduce facts into a conversation, their generated responses do not acknowledge the prior turns. This is because models trained with two contexts - new factual content and conversational history - generate responses that are non-specific w.r.t. one of the contexts, typically the conversational history. We show that specificity w.r.t. conversational history is better captured by Pointwise Conditional Mutual Information (pcmi_h) than by the established use of Pointwise Mutual ...
URL: https://underline.io/lecture/19855-human-like-informative-conversations-better-acknowledgements-using-conditional-mutual-information
https://dx.doi.org/10.48448/knbp-hd43
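As a point of reference for this record, the two quantities contrasted in the abstract have standard information-theoretic definitions. The sketch below uses illustrative variable names (x for the generated response, h for the conversational history, f for the new factual content); the paper defines its own pcmi_h precisely, so treat the conditioning shown here as an assumption rather than the paper's exact formulation:

pmi(x; h) = log p(x | h) - log p(x)
pcmi(x; h | f) = log p(x | h, f) - log p(x | f)

Conditioning on the factual content f discounts whatever likelihood the response gains from the facts alone, so a large pcmi value singles out responses that are specific to the conversational history.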
12
Biomedical and clinical English model packages for the Stanza Python NLP library
In: J Am Med Inform Assoc (2021)
13
Universal Dependencies 2.7
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2020
14
Universal Dependencies 2.6
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2020
15
Stanza: A Python Natural Language Processing Toolkit for Many Human Languages ...
Qi, Peng; Zhang, Yuhao; Zhang, Yuhui. - : arXiv, 2020
16
Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection ...
17
Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation ...
18
Finding Universal Grammatical Relations in Multilingual BERT ...
19
Universal Dependencies 2.5
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2019
20
Universal Dependencies 2.4
Nivre, Joakim; Abrams, Mitchell; Agić, Željko. - : Universal Dependencies Consortium, 2019

