
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
In: Dagstuhl Reports (DagRep), Volume 11, Issue 7, Aug 2021, pp. 89–138, ISSN 2192-5283. DOI: 10.4230/DagRep.11.7.89. HAL: https://hal.archives-ouvertes.fr/hal-03507948. Seminar wiki: https://gitlab.com/unlid/dagstuhl-seminar/-/wikis/home (2021)
2
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351)
Croft, William; Savary, Agata; Baldwin, Timothy. Dagstuhl Reports (DagRep), Volume 11, Issue 7, 2021
3
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
4
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
5
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
6
Universal Dependencies ...
7
Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration ...
Basirat, Ali; Nivre, Joakim. arXiv, 2021
8
Revisiting Negation in Neural Machine Translation ...
9
Universals of Linguistic Idiosyncrasy in Multilingual Computational Linguistics (Dagstuhl Seminar 21351) ...
Baldwin, Timothy; Croft, William; Nivre, Joakim. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021
10
Attention Can Reflect Syntactic Structure (If You Let It) ...
11
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions? ...
de Lhoneux, Miryam; Nivre, Joakim. NAACL 2021. Underline Science Inc., 2021
12
Schrödinger's Tree -- On Syntax and Neural Language Models ...
Kulmizev, Artur; Nivre, Joakim. arXiv, 2021
Abstract: In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune). Amidst this process, language models have emerged as NLP's workhorse, displaying increasingly fluent generation capabilities and proving to be an indispensable means of knowledge transfer downstream. Due to the otherwise opaque, black-box nature of such models, researchers have employed aspects of linguistic theory in order to characterize their behavior. Questions central to syntax -- the study of the hierarchical structure of language -- have factored heavily into such work, shedding invaluable insights about models' inherent biases and their ability to make human-like generalizations. In this paper, we attempt to take stock of this growing body of literature. In doing so, we observe a lack of clarity across numerous dimensions, which influences the hypotheses ... : preprint, submitted to Frontiers in Artificial Intelligence: Perspectives for Natural Language Processing between AI, Linguistics and Cognitive Science ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2110.08887
DOI: https://dx.doi.org/10.48550/arxiv.2110.08887

Sources: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 12