
Search in the Catalogues and Directories

Hits 1 – 10 of 10

1
Training dynamics of neural language models ...
Saphra, Naomi. - The University of Edinburgh, 2021
BASE
2
A Non-Linear Structural Probe
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
BASE
3
A Non-Linear Structural Probe ...
BASE
4
Training dynamics of neural language models
Saphra, Naomi. - The University of Edinburgh, 2021
Abstract: Why do artificial neural networks model language so well? We claim that in order to answer this question and understand the biases that lead to such high-performing language models, and indeed all models that handle language, we must analyze the training process. For decades, linguists have used the tools of developmental linguistics to study human bias towards linguistic structure. Similarly, we wish to consider a neural network's training dynamics, i.e., the analysis of training in practice and the study of why our optimization methods work when applied. This framing shows us how structural patterns and linguistic properties are gradually built up, revealing more about why LSTM models learn so effectively on language data. To explore these questions, we might be tempted to appropriate methods from developmental linguistics, but we do not wish to make cognitive claims, so we avoid analogizing between human and artificial language learners. We instead use mathematical tools designed for investigating language model training dynamics. These tools can take advantage of crucial differences between child development and model training: we have access to activations, weights, and gradients in a learning model, and can manipulate learning behavior directly or by perturbing inputs. While most research in training dynamics has focused on vision tasks, language offers direct annotation of its well-documented and intuitive latent hierarchical structures (e.g., syntax and semantics) and is therefore an ideal domain for exploring the effect of training dynamics on the representation of such structure. Focusing on LSTM models, we investigate the natural sparsity of gradients and activations, finding that word representations are focused on just a few neurons late in training. Similarity analysis reveals how word embeddings learned for different tasks are highly similar at the beginning of training, but gradually become task-specific. Using synthetic data and measuring feature interactions, we also discover that hierarchical representations in LSTMs may be a result of their learning strategy: they tend to build new trees out of familiar phrases by mingling together the meanings of constituents so they depend on each other. These discoveries constitute just a few possible explanations for how LSTMs learn generalized language representations, with further theories on more architectures to be uncovered by the growing field of NLP training dynamics.
Keyword: interpretability; NLP; training dynamics
URL: https://hdl.handle.net/1842/38154
BASE
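(An illustrative sketch of the similarity analysis mentioned in this abstract follows the hit list below.)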
5
Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
6
Pareto Probing: Trading Off Accuracy for Complexity ...
BASE
7
Understanding Learning Dynamics Of Language Models with SVCCA ...
Saphra, Naomi; Lopez, Adam. - arXiv, 2018
BASE
8
Pynlpl: V0.7.7.1 ...
BASE
9
Pynlpl: V0.7.7 ...
BASE
10
A framework for (under)specifying dependency syntax without overloading annotators ...
BASE
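Hits 4 and 7 both rest on similarity analysis of representations across training checkpoints (SVCCA). As a rough illustration only, not code from these works, the sketch below computes an SVCCA-style similarity between two word-embedding matrices, e.g. from an early and a late checkpoint; all names, shapes, and the variance threshold here are assumptions made for the example.

# Illustrative sketch (assumed setup, not the papers' code): SVCCA-style
# similarity between word-embedding matrices from two training checkpoints.
import numpy as np

def svcca_similarity(X, Y, var_kept=0.99):
    """Mean canonical correlation between SVD-reduced views of X and Y,
    each of shape (n_words, dim)."""
    def svd_reduce(A):
        A = A - A.mean(axis=0)                  # center each dimension
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        keep = np.cumsum(s**2) / np.sum(s**2) <= var_kept
        keep[0] = True                          # always keep the top direction
        return U[:, keep] * s[keep]             # reduced representation
    Xr, Yr = svd_reduce(X), svd_reduce(Y)
    # CCA via QR: for centered data, the canonical correlations are the
    # singular values of Qx^T Qy.
    Qx, _ = np.linalg.qr(Xr)
    Qy, _ = np.linalg.qr(Yr)
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(rho.mean())

# Toy usage with synthetic "checkpoints": the late embeddings are a linear
# transform of the early ones plus noise, so the score should be near 1.0.
rng = np.random.default_rng(0)
early = rng.normal(size=(1000, 64))
late = early @ rng.normal(size=(64, 64)) + 0.1 * rng.normal(size=(1000, 64))
print(svcca_similarity(early, late))

Tracked across many checkpoints and tasks, a score of this kind is what makes the abstract's claim precise: embeddings for different tasks start out highly similar and gradually become task-specific.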

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 10
© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings