
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN ...
BASE
2
Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis ...
BASE
3
Universal linguistic inductive biases via meta-learning ...
BASE
4
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs ...
BASE
5
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
In: Transactions of the Association for Computational Linguistics, Vol. 8, pp. 125-140 (2020)
BASE
6
RNNs Implicitly Implement Tensor Product Representations
In: ICLR 2019 - International Conference on Learning Representations, May 2019, New Orleans, United States; https://hal.archives-ouvertes.fr/hal-02274498 (2019)
BASE
7
What do you learn from context? Probing for sentence structure in contextualized word representations ...
BASE
8
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference ...
BASE
9
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance ...
BASE
10
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks ...
Abstract: Syntactic rules in natural language typically need to make reference to hierarchical sentence structure. However, the simple examples that language learners receive are often equally compatible with linear rules. Children consistently ignore these linear explanations and settle instead on the correct hierarchical one. This fact has motivated the proposal that the learner's hypothesis space is constrained to include only hierarchical rules. We examine this proposal using recurrent neural networks (RNNs), which are not constrained in such a way. We simulate the acquisition of question formation, a hierarchical transformation, in a fragment of English. We find that some RNN architectures tend to learn the hierarchical rule, suggesting that hierarchical cues within the language, combined with the implicit architectural biases inherent in certain RNNs, may be sufficient to induce hierarchical generalizations. The likelihood of acquiring the hierarchical generalization increased when the language included an ...
Comment: Proceedings of the 40th Annual Conference of the Cognitive Science Society; 10 pages
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1802.09091
https://dx.doi.org/10.48550/arxiv.1802.09091
BASE
11
TAG Parsing with Neural Networks and Vector Representations of Supertags
In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Sep 2017, Copenhagen, Denmark, pp. 1712-1722; https://hal.archives-ouvertes.fr/hal-01771494 (2017)
BASE

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 11