
Search in the Catalogues and Directories

Hits 21–40 of 1,423

21. HIT - A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation ... (BASE)
22. Minimally-Supervised Morphological Segmentation using Adaptor Grammars with Linguistic Priors ... (BASE)
23. Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation ... (BASE)
24. LearnDA: Learnable Knowledge-Guided Data Augmentation for Event Causality Identification ... (BASE)
25. Quotation Recommendation and Interpretation Based on Transformation from Queries to Quotations ... (BASE)
26. How Did This Get Funded?! Automatically Identifying Quirky Scientific Achievements ... (BASE)
27. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
28. CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding ... (BASE)
29. Towards Protecting Vital Healthcare Programs by Extracting Actionable Knowledge from Policy ... (BASE)
30. DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation ... (BASE)
31. Automated Concatenation of Embeddings for Structured Prediction ... (BASE)
32. QASR: QCRI Aljazeera Speech Resource A Large Scale Annotated Arabic Speech Corpus ... (BASE)
33. Code Generation from Natural Language with Less Prior Knowledge and More Monolingual Data ... (BASE)
Read paper: https://www.aclanthology.org/2021.acl-short.98
Abstract: Training datasets for semantic parsing are typically small, due to the higher expertise required for annotation than for most other NLP tasks. As a result, models for this application usually need additional prior knowledge built into the architecture or algorithm. The increased dependency on human experts hinders automation and raises development and maintenance costs in practice. This work investigates whether a generic transformer-based seq2seq model can achieve competitive performance with minimal code-generation-specific inductive bias. By exploiting a relatively sizeable monolingual corpus of the target programming language, which is cheap to mine from the web, we achieved 81.03% exact match accuracy on Django and a 32.57 BLEU score on CoNaLa; both are state of the art to the best of our knowledge. This positive evidence highlights a potentially easier path toward building accurate semantic parsers in practice. ...
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/xh3g-gp08
https://underline.io/lecture/25817-code-generation-from-natural-language-with-less-prior-knowledge-and-more-monolingual-data
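The exact-match accuracy quoted in the abstract above can be illustrated with a minimal sketch. The whitespace normalization and the Django-style snippets below are hypothetical placeholders chosen for illustration, not the paper's actual evaluation code or data:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions identical to their reference after
    collapsing all whitespace (one simple normalization choice;
    the paper's exact protocol may differ)."""
    def norm(code: str) -> str:
        return " ".join(code.split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical model outputs vs. gold Django-style snippets
preds = ["x = settings.DEBUG", "for k in d:\n    print(k)", "return None"]
golds = ["x = settings.DEBUG", "for k in d: print(k)", "return 0"]
print(exact_match_accuracy(preds, golds))  # 2 of 3 match -> 0.666...
```

BLEU on CoNaLa, by contrast, scores partial n-gram overlap rather than all-or-nothing equality, which is why the two benchmarks report different metric types.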
34. On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers ... (BASE)
35. Learning Disentangled Latent Topics for Twitter Rumour Veracity Classification ... (BASE)
36. Sequence Models for Computational Etymology of Borrowings ... (BASE)
37. Scaling Within Document Coreference to Long Texts ... (BASE)
38. How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation ... (BASE)
39. Prefix-Tuning: Optimizing Continuous Prompts for Generation ... (BASE)
40. Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL ... (BASE)


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 1,423
© 2013 - 2024 Lin|gu|is|tik