1. Integrating Vectorized Lexical Constraints for Neural Machine Translation ...
   Source: BASE
2. Contextual Semantic-Guided Entity-Centric GCN for Relation Extraction
   In: Mathematics; Volume 10; Issue 8; Pages: 1344 (2022)
3. Virtual Reality-Integrated Immersion-Based Teaching to English Language Learning Outcome
   In: Front Psychol (2022)
5. Alternated Training with Synthetic and Authentic Data for Neural Machine Translation ...
6. CPM-2: Large-scale Cost-effective Pre-trained Language Models ...
7. VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator ...
8. Assessing Multilingual Fairness in Pre-trained Multimodal Representations ...
9. DialogSum: A Real-Life Scenario Dialogue Summarization Dataset ...
10. Transfer Learning for Sequence Generation: from Single-source to Multi-source ...
    Abstract: Multi-source sequence generation (MSG) is an important kind of sequence generation task that takes multiple sources, including automatic post-editing, multi-source translation, multi-document summarization, etc. As MSG tasks suffer from the data scarcity problem, and recent pretrained models have proven effective for low-resource downstream tasks, transferring pretrained sequence-to-sequence models to MSG tasks is essential. Although directly finetuning pretrained models on MSG tasks, concatenating multiple sources into a single long sequence, is regarded as a simple way to transfer pretrained models to MSG tasks, we conjecture that direct finetuning leads to catastrophic forgetting, and that relying solely on pretrained self-attention layers to capture cross-source information is not sufficient. Therefore, we propose a two-stage finetuning method to alleviate the pretrain-finetune discrepancy and introduce a novel MSG ...
    Paper: https://www.aclanthology.org/2021.acl-long.446
    URLs: https://dx.doi.org/10.48448/hc3e-sm74
          https://underline.io/lecture/25885-transfer-learning-for-sequence-generation-from-single-source-to-multi-source
12. Segment, Mask, and Predict: Augmenting Chinese Word Segmentation with Self-Supervision ...
13. Learning to Selectively Learn for Weakly-supervised Paraphrase Generation ...
14. SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection ...
15. Analyzing the Limits of Self-Supervision in Handling Bias in Language ...
16. Statistically Significant Detection of Semantic Shifts using Contextual Word Embeddings ...
19. Leveraging Word-Formation Knowledge for Chinese Word Sense Disambiguation ...