1. Improving Pre-trained Language Models with Syntactic Dependency Prediction Task for Chinese Semantic Error Recognition ...
2. ExpMRC: explainability evaluation for machine reading comprehension. In: Heliyon (2022)
3. Multilingual multi-aspect explainability analyses on machine reading comprehension models. In: iScience (2022)
4. Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models ...
5. Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training ...
6. Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL ...
7. GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling ...
8. A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples ...
10. Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling ...
11. Neural Stylistic Response Generation with Disentangled Latent Variables ...
13. Language learners' enjoyment and emotion regulation in online collaborative learning
14. Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural Network ...
15. TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching ...
Abstract:
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data is limited. Recently, pre-trained language models have shown potential for few-shot learning thanks to the linguistic knowledge learned from pretraining on large-scale corpora. However, applying a powerful pre-trained language model to table-to-text generation in the few-shot setting faces three challenges: (1) the gap between the task's structured input and the natural-language input the language model was pretrained on; (2) the lack of modeling of table structure; and (3) improving text fidelity by reducing incorrect expressions that contradict the table. To address these problems, we propose TableGPT for table-to-text generation. First, we use a table transformation module with templates to rewrite the structured table as natural-language input for GPT-2. In addition, we exploit multi-task learning with two auxiliary tasks ...
Keyword:
Computer and Information Science; Natural Language Processing; Neural Network
URL: https://underline.io/lecture/6210-tablegpt-few-shot-table-to-text-generation-with-table-structure-reconstruction-and-content-matching
DOI: https://dx.doi.org/10.48448/hk4q-gq78
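The abstract for entry 15 describes rewriting the structured table into natural language via a template before feeding it to GPT-2. Below is a minimal sketch of that linearization idea, assuming the table is a flat set of attribute-value pairs and using the Hugging Face transformers GPT-2 classes; the template wording, the linearize_table helper, and the example record are illustrative assumptions, not TableGPT's actual implementation.

```python
# Illustrative sketch only: the exact template and model wiring used by TableGPT
# are not given in the truncated abstract, so the table format (attribute-value
# pairs) and the template wording below are assumptions.

from transformers import GPT2LMHeadModel, GPT2Tokenizer


def linearize_table(table: dict) -> str:
    """Rewrite an attribute-value table as a natural-language prompt (hypothetical template)."""
    clauses = [f"the {attribute} is {value}" for attribute, value in table.items()]
    return "Given that " + ", and ".join(clauses) + ", describe the record: "


if __name__ == "__main__":
    # Hypothetical example record for illustration.
    record = {"name": "Walter Extra", "nationality": "German", "occupation": "aircraft designer"}
    prompt = linearize_table(record)

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In the paper's full setup, the auxiliary table-structure-reconstruction and content-matching objectives named in the title would be trained jointly with this generation step; they are omitted from the sketch.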
16. N-LTP: An Open-source Neural Language Technology Platform for Chinese ...
20. Towards Better UD Parsing: Deep Contextualized Word Embeddings, Ensemble, and Treebank Concatenation ...