3 | Cultural Variance in Reception and Interpretation of Social Media COVID-19 Disinformation in French-Speaking Regions
In: International Journal of Environmental Research and Public Health; Volume 18; Issue 23; Pages: 12624 (2021)
Source: BASE

4 | A Non-Linear Structural Probe
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Source: BASE

5 | Examining the Inductive Bias of Neural Language Models with Artificial Languages
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021)
Abstract: Since language models are used to model a wide variety of languages, it is natural to ask whether the neural architectures used for the task have inductive biases towards modeling particular types of languages. Investigation of these biases has proved complicated due to the many variables that appear in the experimental setup. Languages vary in many typological dimensions, and it is difficult to single out one or two to investigate without the others acting as confounders. We propose a novel method for investigating the inductive biases of language models using artificial languages. These languages are constructed to allow us to create parallel corpora across languages that differ only in the typological feature being investigated, such as word order. We then use them to train and test language models. This constitutes a fully controlled causal framework, and demonstrates how grammar engineering can serve as a useful tool for analyzing neural models. Using this method, we find that commonly used neural architectures exhibit different inductive biases: LSTMs display little preference with respect to word ordering, while transformers display a clear preference for some orderings over others. Further, we find that neither the inductive bias of the LSTM nor that of the transformer appears to reflect any tendencies that we see in attested natural languages.
URL: https://hdl.handle.net/20.500.11850/521265
DOI: https://doi.org/10.3929/ethz-b-000519004
Source: BASE

7 | SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
Source: BASE

8 | What is needed in culturally competent healthcare systems? A qualitative exploration of culturally diverse patients and professional interpreters in an Australian healthcare setting ...
Source: BASE

11 | Dialectical Behaviour Therapy for Aboriginal children and adolescents in residential care: A feasibility study
Source: BASE