1. Learning Stress Patterns with a Sequence-to-Sequence Neural Network.
   In: Proceedings of the Society for Computation in Linguistics (2022).

2. Learning Repetition, but not Syllable Reversal.
   In: Proceedings of the 2020 Annual Meeting on Phonology, ISSN 2377-3324 (2021).

3. French schwa and gradient cumulativity.
   In: Glossa: a journal of general linguistics, Vol. 5, No. 1, Article 24, ISSN 2397-1835 (2020).

4. Assimilation triggers metathesis in Balantak: Implications for theories of possible repair in Optimality Theory.
   In: University of Massachusetts Occasional Papers in Linguistics (2020).

5. Learning Reduplication with a Neural Network without Explicit Variables.
   Joe Pater (2019).

6. Phonological typology in Optimality Theory and Formal Language Theory: Goals and future directions.
   Joe Pater (2019).

7. Learning syntactic parameters without triggers by assigning credit and blame.
   Joe Pater (2019).

8. Generative linguistics and neural networks at 60: foundation, friction, and fusion.
   Joe Pater (2019).

   Abstract: The birthdate of both generative linguistics and neural networks can be taken as 1957, the year of the publication of foundational work by both Noam Chomsky and Frank Rosenblatt. This paper traces the development of these two approaches to cognitive science, from their largely autonomous beginnings in their first thirty years, through their collision in the 1980s around the past tense debate (Rumelhart and McClelland 1986, Pinker and Prince 1988), to their integration in much subsequent work up to the present. Although this integration has produced a considerable body of results, the gulf that still generally separates these two lines of research is likely impeding progress in both: on learning in generative linguistics, and on the representation of language in neural modeling. The paper concludes with a brief argument that generative linguistics is unlikely to fulfill its promise of accounting for language learning if it continues to maintain its distance from neural and statistical approaches to learning.

   Keywords: Artificial Intelligence and Robotics; Cognitive Psychology; Computational Linguistics; Linguistics.

   URL: https://works.bepress.com/joe_pater/35

9. Preface: SCiL 2019 Editors’ Note.
   In: Proceedings of the Society for Computation in Linguistics (2019).

10. Substance matters: A reply to Jardine 2016.
    Joe Pater (2018).

11. Seq2Seq Models with Dropout can Learn Generalizable Reduplication.
    Joe Pater (2018).

12. Preface: SCiL 2018 Editors’ Note.
    In: Proceedings of the Society for Computation in Linguistics (2018).

13. Gradient Exceptionality in Maximum Entropy Grammar with Lexically Specific Constraints.