
Search in the Catalogues and Directories

Hits 1 – 7 of 7

1
Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...
2
ANLIzing the Adversarial Natural Language Inference Dataset
In: Proceedings of the Society for Computation in Linguistics (2022)
3
Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization
In: Association for Computational Linguistics (2021)
4
Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection ...
5
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation ...
6
Compositional Neural Machine Translation by Removing the Lexicon from Syntax ...
Thrush, Tristan. - : arXiv, 2020
7
SAL: a Self-Aware Learning system
Thrush, Tristan Andrew Fraser. - : Massachusetts Institute of Technology, 2019
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF of the thesis. Includes bibliographical references (pages 67-68).
Abstract: In this thesis, I take a step towards understanding how and why humans learn to solve problems about their solving of problems. I present a general-purpose neural reinforcement learning system called SAL, which can learn to think about its own problem solving and use this capability to learn how to solve problems at another level. I show that SAL can use self-reference to articulate, and learn to articulate, its thoughts to a human, and to internalize and apply a human's help, in natural language. I also demonstrate that SAL's abilities are enabled by an internal representation that shares important properties with, and is easily converted to and from, natural language. On the practical side, I argue that SAL can inform production question answering systems research. SAL can answer multi-step questions that are grounded in the world by extracting operational knowledge from pre-trained word embeddings. As an example, SAL knows how to use the action associated with "grab [the] diesel jug" to get closer to a solution, given the state of a physical world and a goal. And SAL can do this without any actual experience using (and without ever being told by a human about) any action associated with "grab" or the argument "diesel jug". SAL can do so with very little training reward data and without assuming anything at first about the operational meaning of any particular lexical item, or composition of lexical items. Typical neural reinforcement learning systems, by contrast, cannot learn like SAL; they only work with an amount of data that would be difficult to obtain in the real world.
SAL's implementation, trained models, analysis code, and instructions are at https://github.com/TristanThrush/sal. It is easy to add new problems (even in new domains) that you want SAL to learn.
Keyword: Electrical Engineering and Computer Science
URL: https://hdl.handle.net/1721.1/127705
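The abstract's claim that SAL can act on an unseen command like "grab [the] diesel jug" by extracting operational knowledge from pre-trained word embeddings can be illustrated in miniature. The sketch below is not SAL's actual method: the embedding vectors, vocabulary, and `nearest_action` helper are all invented for illustration, and only the general idea (mapping a novel phrase to a known action by embedding similarity) comes from the abstract.

```python
import math

# Hypothetical toy word embeddings. A real system would load pre-trained
# vectors; these 3-d values are invented purely for illustration.
EMBEDDINGS = {
    "grab":   [0.9, 0.1, 0.0],
    "take":   [0.8, 0.2, 0.1],
    "drop":   [-0.7, 0.1, 0.2],
    "jug":    [0.1, 0.9, 0.0],
    "bottle": [0.2, 0.8, 0.1],
}

def embed(phrase):
    """Average the vectors of the known words in a phrase."""
    vecs = [EMBEDDINGS[w] for w in phrase.split() if w in EMBEDDINGS]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest_action(command, known_actions):
    """Map an unseen command to the most similar known action by embedding."""
    q = embed(command)
    return max(known_actions, key=lambda a: cosine(q, embed(a)))

# An agent that only ever trained on "take bottle" can still interpret the
# novel command "grab jug", because the embeddings place the phrases nearby.
print(nearest_action("grab jug", ["take bottle", "drop bottle"]))
```

With these toy vectors the novel command "grab jug" resolves to the trained action "take bottle", which is the flavor of zero-experience grounding the abstract attributes to SAL, minus everything that makes SAL interesting (reinforcement learning, self-reference, and compositional state).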

Hits by source: Open access documents – 7; Catalogues, Bibliographies, Linked Open Data catalogues, and Online resources – 0.
© 2013 - 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings