Catalogue search
Hits 1 – 11 of 11
1. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN ...
   McCoy, R. Thomas; Smolensky, Paul; Linzen, Tal. arXiv, 2021. (BASE)
2. Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis ...
   Lepori, Michael A.; McCoy, R. Thomas. arXiv, 2020. (BASE)
3. Universal linguistic inductive biases via meta-learning ...
   McCoy, R. Thomas; Grant, Erin; Smolensky, Paul. arXiv, 2020. (BASE)
4. Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs ...
   Lepori, Michael A.; Linzen, Tal; McCoy, R. Thomas. arXiv, 2020. (BASE)
5. Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
   McCoy, R. Thomas; Frank, Robert; Linzen, Tal
   In: Transactions of the Association for Computational Linguistics, Vol. 8, pp. 125-140 (2020)
   Abstract: Learners that are exposed to the same training data might generalize differently due to differing inductive biases. In neural network models, inductive biases could in theory arise from any aspect of the model architecture. We investigate which architectural factors affect the generalization behavior of neural sequence-to-sequence models trained on two syntactic tasks, English question formation and English tense reinflection. For both tasks, the training set is consistent with a generalization based on hierarchical structure and a generalization based on linear order. All architectural factors that we investigated qualitatively affected how models generalized, including factors with no clear connection to hierarchical structure. For example, LSTMs and GRUs displayed qualitatively different inductive biases. However, the only factor that consistently contributed a hierarchical bias across tasks was the use of a tree-structured model rather than a model with sequential recurrence, suggesting that human-like syntactic generalization requires architectural syntactic structure.
   Keyword: Computational linguistics. Natural language processing; P98-98.5
   URL: https://doaj.org/article/ca442dfb7bd44ccf991dc7158480ae51
   https://doi.org/10.1162/tacl_a_00304
   (BASE)
6. RNNs Implicitly Implement Tensor Product Representations
   McCoy, R. Thomas; Linzen, Tal; Dunbar, Ewan ...
   In: ICLR 2019 - International Conference on Learning Representations, May 2019, New Orleans, United States (2019)
   https://hal.archives-ouvertes.fr/hal-02274498
   (BASE)
7. What do you learn from context? Probing for sentence structure in contextualized word representations ...
   Tenney, Ian; Xia, Patrick; Chen, Berlin. arXiv, 2019. (BASE)
8. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference ...
   McCoy, R. Thomas; Pavlick, Ellie; Linzen, Tal. arXiv, 2019. (BASE)
9. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance ...
   McCoy, R. Thomas; Min, Junghyun; Linzen, Tal. arXiv, 2019. (BASE)
10. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks ...
    McCoy, R. Thomas; Frank, Robert; Linzen, Tal. arXiv, 2018. (BASE)
11. TAG Parsing with Neural Networks and Vector Representations of Supertags
    Kasai, Jungo; Frank, Robert; McCoy, R. Thomas ...
    In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Sep 2017, Copenhagen, Denmark, pp. 1712-1722 (2017)
    https://hal.archives-ouvertes.fr/hal-01771494
    (BASE)
© 2013 - 2024 Lin|gu|is|tik