
Search in the Catalogues and Directories

Hits 1 – 3 of 3

1
Revisiting Self-training for Few-shot Learning of Language Model ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.718/
Abstract: As unlabeled data carry rich task-relevant information, they have proven useful for few-shot learning of language models. The question is how to make effective use of such data. In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM. Given two views of a text sample produced by weak and strong augmentation, SFLM generates a pseudo label on the weakly augmented version; the model is then fine-tuned to predict the same pseudo label on the strongly augmented version. This simple approach outperforms state-of-the-art supervised and semi-supervised counterparts on six sentence-classification and six sentence-pair-classification benchmark tasks. In addition, SFLM relies on only a few in-domain unlabeled examples. We conduct a comprehensive analysis demonstrating the robustness of our proposed approach under various ...
Keyword: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/38073-revisiting-self-training-for-few-shot-learning-of-language-model
https://dx.doi.org/10.48448/n28v-0y36
Source: BASE
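The self-training scheme the abstract describes, i.e. taking the model's confident prediction on a weakly augmented view as a pseudo label and training the strongly augmented view toward it, can be sketched in a few lines. This is a minimal illustration of the general FixMatch-style consistency loss, not the paper's implementation; the function name, the list-based probability inputs, and the confidence threshold value are assumptions for the sketch.

```python
import math

def self_training_loss(weak_probs, strong_probs, threshold=0.95):
    """Sketch of a pseudo-label consistency loss (FixMatch-style).

    weak_probs:   class probabilities predicted on the weakly augmented view
    strong_probs: class probabilities predicted on the strongly augmented view
    Returns (loss, pseudo_label); unconfident samples are masked out as (0.0, None).
    """
    confidence = max(weak_probs)
    pseudo = weak_probs.index(confidence)  # hard pseudo label from the weak view
    if confidence < threshold:
        return 0.0, None  # skip samples the model is not confident about
    # cross-entropy of the strong view against the fixed pseudo label
    loss = -math.log(max(strong_probs[pseudo], 1e-12))
    return loss, pseudo
```

In a full training loop this per-sample term would be averaged over the unlabeled batch and added to the supervised loss on the few labeled examples, with gradients flowing only through the strong-view predictions.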
2
DynaEval: Unifying Turn and Dialogue Level Evaluation ...
Source: BASE
3
Bootstrapped Unsupervised Sentence Representation Learning ...
Source: BASE

Hits by source type: Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 3
© 2013 – 2024 Lin|gu|is|tik