
Search in the Catalogues and Directories

Hits 81–100 of 1,255

81. How effective is BERT without word ordering? Implications for language understanding and data privacy (BASE)
82. GEM: Natural Language Generation, Evaluation, and Metrics - Part 4 (BASE)
83. The statistical advantage of automatic NLG metrics at the system level (BASE)
84. Counter-Argument Generation by Attacking Weak Premises (BASE)
85. Supporting Cognitive and Emotional Empathic Writing of Students (BASE)
86. What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus (BASE)
87. Are Pretrained Convolutions Better than Pretrained Transformers? (BASE)
88. Evaluation Examples are not Equally Informative: How should that change NLP Leaderboards? (BASE)
89. Beyond Offline Mapping: Learning Cross-lingual Word Embeddings through Context Anchoring (BASE)
90. Hate Speech Detection Based on Sentiment Knowledge Sharing (BASE)
91. Taming Pre-trained Language Models with N-gram Representations for Low-Resource Domain Adaptation (BASE)
Abstract: Large pre-trained models such as BERT are known to improve different downstream NLP tasks, even when such a model is trained on a generic domain. Moreover, recent studies have shown that when large domain-specific corpora are available, continued pre-training on domain-specific data can further improve the performance of in-domain tasks. However, this practice requires significant domain-specific data and computational resources, which may not always be available. In this paper, we aim to adapt a generic pretrained model with a relatively small amount of domain-specific data. We demonstrate that by explicitly incorporating multi-granularity information of unseen and domain-specific words via the adaptation of (word based) n-grams, the performance of a generic pretrained model can be greatly improved. Specifically, we introduce a Transformer-based Domain-aware N-gram Adaptor, T-DNA, to effectively learn and incorporate the semantic ...
Paper: https://www.aclanthology.org/2021.acl-long.259
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/yw8k-fy60
https://underline.io/lecture/25595-taming-pre-trained-language-models-with-n-gram-representations-for-low-resource-domain-adaptation
92. Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction (BASE)
93. WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation (BASE)
94. An End-to-End Progressive Multi-Task Learning Framework for Medical Named Entity Recognition and Normalization (BASE)
95. How does Attention Affect the Model? (BASE)
96. Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance (BASE)
97. Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities (BASE)
98. Neural Machine Translation with Monolingual Translation Memory (BASE)
99. Using Meta-Knowledge Mined from Identifiers to Improve Intent Recognition in Conversational Systems (BASE)
100. Modeling Transitions of Focal Entities for Conversational Knowledge Base Question Answering (BASE)


Hits by category:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 1,255