
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN ...
BASE
2
Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis ...
BASE
3
Universal linguistic inductive biases via meta-learning ...
BASE
4
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs ...
BASE
5
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
In: Transactions of the Association for Computational Linguistics, Vol 8, pp. 125-140 (2020)
Abstract: Learners that are exposed to the same training data might generalize differently due to differing inductive biases. In neural network models, inductive biases could in theory arise from any aspect of the model architecture. We investigate which architectural factors affect the generalization behavior of neural sequence-to-sequence models trained on two syntactic tasks, English question formation and English tense reinflection. For both tasks, the training set is consistent with a generalization based on hierarchical structure and a generalization based on linear order. All architectural factors that we investigated qualitatively affected how models generalized, including factors with no clear connection to hierarchical structure. For example, LSTMs and GRUs displayed qualitatively different inductive biases. However, the only factor that consistently contributed a hierarchical bias across tasks was the use of a tree-structured model rather than a model with sequential recurrence, suggesting that human-like syntactic generalization requires architectural syntactic structure.
Keywords: Computational linguistics; Natural language processing; P98-98.5
URL: https://doaj.org/article/ca442dfb7bd44ccf991dc7158480ae51
https://doi.org/10.1162/tacl_a_00304
BASE
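To make the contrast in this abstract concrete, the following minimal sketch (not code from the paper; the toy sentences, auxiliary list, and function names are illustrative assumptions) spells out the two question-formation generalizations it describes: fronting the first auxiliary in linear order versus fronting the main-clause auxiliary, which only come apart on held-out sentences containing a relative clause.

# Minimal sketch of the two generalizations described in the abstract above.
# All names and examples here are illustrative, not taken from the paper's code.

AUXES = {"is", "are", "was", "were", "do", "does", "did", "can"}

def linear_rule(tokens):
    """Front the FIRST auxiliary in the sentence (linear-order generalization)."""
    for i, tok in enumerate(tokens):
        if tok in AUXES:
            return [tok] + tokens[:i] + tokens[i + 1:]
    return tokens

def hierarchical_rule(tokens, main_aux_index):
    """Front the auxiliary of the MAIN clause (hierarchical generalization).
    The main-clause auxiliary position is supplied here; a tree-structured
    model would have to recover it from constituency structure."""
    tok = tokens[main_aux_index]
    return [tok] + tokens[:main_aux_index] + tokens[main_aux_index + 1:]

# Training-style example: both rules yield the same question, so the
# training data cannot distinguish them.
train = "the dog is happy".split()
assert linear_rule(train) == hierarchical_rule(train, 2)

# Held-out example with a relative clause: the rules diverge.
test = "the dog that is sleeping is happy".split()
print(" ".join(linear_rule(test)))           # "is the dog that sleeping is happy"
print(" ".join(hierarchical_rule(test, 5)))  # "is the dog that is sleeping happy"

Only the hierarchical rule produces the grammatical question on the relative-clause sentence; which rule a trained model applies to such sentences is what the paper's generalization tests probe.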
6
RNNs Implicitly Implement Tensor Product Representations
In: ICLR 2019 - International Conference on Learning Representations, May 2019, New Orleans, United States ; https://hal.archives-ouvertes.fr/hal-02274498 (2019)
BASE
7
What do you learn from context? Probing for sentence structure in contextualized word representations ...
BASE
8
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference ...
BASE
9
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance ...
BASE
10
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks ...
BASE
11
TAG Parsing with Neural Networks and Vector Representations of Supertags
In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Sep 2017, Copenhagen, Denmark, pp. 1712-1722 ; https://hal.archives-ouvertes.fr/hal-01771494 (2017)
BASE

Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 11