4. Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models. In: Association for Computational Linguistics (2021).
5. Structural Supervision Improves Learning of Non-Local Grammatical Dependencies. In: Association for Computational Linguistics (2021).
6. Event-Driven News Stream Clustering using Entity-Aware Contextual Embeddings ...
8. On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations ...

Abstract: The adaptation of pretrained language models to solve supervised tasks has become a baseline in NLP, and many recent works have focused on studying how linguistic information is encoded in the pretrained sentence representations. Among other findings, it has been shown that entire syntax trees are implicitly embedded in the geometry of such models. As these models are often fine-tuned, it becomes increasingly important to understand how the encoded knowledge evolves over the course of fine-tuning. In this paper, we analyze the evolution of the embedded syntax trees along the fine-tuning process of BERT for six different tasks, covering all levels of the linguistic structure. Experimental results show that the encoded syntactic information is forgotten (PoS tagging), reinforced (dependency and constituency parsing), or preserved (semantics-related tasks) in different ways along the fine-tuning process, depending on the task. ...

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://arxiv.org/abs/2101.11492 ; https://dx.doi.org/10.48550/arxiv.2101.11492
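
The abstract above turns on the finding that entire syntax trees are "implicitly embedded in the geometry" of BERT's representations. That claim is commonly operationalized with a structural probe in the style of Hewitt and Manning (2019): a learned linear map under which squared L2 distances between transformed word vectors approximate parse-tree distances. Below is a minimal sketch of the idea in PyTorch; the class name, the probe_rank value, and the dummy tensors are illustrative assumptions, not the implementation used in the paper listed here.

```python
import torch
import torch.nn as nn

class StructuralProbe(nn.Module):
    """Linear map B such that ||B(h_i) - B(h_j)||^2 approximates the
    distance between words i and j in the gold parse tree."""

    def __init__(self, hidden_dim: int = 768, probe_rank: int = 128):
        super().__init__()
        # Low-rank projection; 768 matches BERT-base, the rank is a hyperparameter.
        self.proj = nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (seq_len, hidden_dim) activations for one sentence.
        transformed = embeddings @ self.proj               # (seq_len, rank)
        diffs = transformed.unsqueeze(1) - transformed.unsqueeze(0)
        return (diffs ** 2).sum(dim=-1)                    # pairwise squared distances

def probe_loss(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # L1 distance between predicted and gold tree distances,
    # normalized by squared sentence length.
    n = gold.size(0)
    return (pred - gold).abs().sum() / (n * n)

# Illustrative usage: random tensors stand in for frozen BERT activations
# and gold parse-tree distances.
emb = torch.randn(12, 768)
gold = torch.randint(0, 8, (12, 12)).float()
probe = StructuralProbe()
loss = probe_loss(probe(emb), gold)
loss.backward()  # gradients flow only into the probe; BERT stays frozen
```

Tracking how well such a probe fits before versus after fine-tuning is one concrete way to measure whether syntactic information is forgotten, reinforced, or preserved, as the abstract describes.
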
9. How much pretraining data do language models need to learn syntax? ...
14. Universal Dependencies 2.2 (2018). URL: https://hal.archives-ouvertes.fr/hal-01930733
18. Multilingual Neural Machine Translation with Task-Specific Attention ...
19. Scheduled Multi-Task Learning: From Syntax to Translation ...