
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging (BASE)
2. ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning (BASE)
3. EmailSum: Abstractive Email Thread Summarization (BASE)
4. Inducing Transformer’s Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks (BASE)
5. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling (BASE)
6. InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection (BASE)
7. Continuous Language Generative Flow (BASE)
8. ChrEnTranslate: Cherokee-English Machine Translation Demo with Quality Estimation and Corrective Feedback (BASE)
9. Integrating Visuospatial, Linguistic, and Commonsense Structure into Story Visualization (BASE)
10. Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline (BASE)
11. Continual Few-Shot Learning for Text Classification (BASE)
12. Finding a Balanced Degree of Automation for Summary Evaluation (BASE)
13. Analysis of Tree-Structured Architectures for Code Generation (BASE)
Abstract: Code generation is the task of generating code snippets from input user specifications in natural language. Leveraging the linguistically-motivated hierarchical structure of the input can benefit code generation, especially since the specifications are complex sentences containing multiple variables and operations over various data structures. Moreover, recent advances in Transformer architectures have led to improved performance with tree-to-tree style generation for other seq2seq tasks, e.g., machine translation. Hence, we present an empirical analysis of the significance of input parse trees for code generation. We run text-to-tree, linearized tree-to-tree, and structured tree-to-tree models, using constituency-based parse trees as input, where the target is the Abstract Syntax Tree (AST) of the code. We evaluate our models on the Python-based code generation dataset CoNaLa and a semantic parsing dataset ATIS. We find that constituency ...
Read paper: https://www.aclanthology.org/2021.findings-acl.384
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Neural Network; Semantics
URL: https://underline.io/lecture/26475-analysis-of-tree-structured-architectures-for-code-generation
DOI: https://dx.doi.org/10.48448/yhxr-pm37
A minimal illustrative sketch of the input and output tree representations described in this abstract appears below.
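The abstract of record 13 describes models that take a constituency parse of the natural-language specification as input, either linearized into a token sequence or kept as a structured tree, and produce the Abstract Syntax Tree of the target code. The following is a minimal sketch of those two representations, not the paper's code: the toy constituency parse and the specification "create an empty dictionary d" are hypothetical, hand-written examples, and the target AST is obtained with Python's standard ast module.

import ast

# Hypothetical, hand-written constituency parse of the specification
# "create an empty dictionary d", encoded as (label, children...) tuples.
# In the setting described in the abstract, this tree would come from a
# constituency parser rather than being written by hand.
SPEC_PARSE = (
    "S",
    ("VP",
     ("VB", "create"),
     ("NP",
      ("DT", "an"),
      ("JJ", "empty"),
      ("NN", "dictionary"),
      ("NN", "d"))),
)

def linearize(node):
    # Flatten a constituency tree into a bracketed token sequence,
    # the source-side input of a linearized tree-to-tree model.
    if isinstance(node, str):      # leaf token
        return [node]
    label, *children = node
    tokens = ["(", label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

# Source side: linearized constituency parse of the specification.
print(" ".join(linearize(SPEC_PARSE)))

# Target side: the Abstract Syntax Tree (AST) of the code snippet the
# model is trained to generate, produced here with Python's ast module.
target_code = "d = {}"
print(ast.dump(ast.parse(target_code), indent=2))

Running the sketch prints the bracketed source sequence ( S ( VP ( VB create ) ( NP ( DT an ) ( JJ empty ) ( NN dictionary ) ( NN d ) ) ) ) and a Module(body=[Assign(...)]) dump of the target AST; a structured tree-to-tree model, by contrast, would consume the nested tree directly instead of the flattened sequence.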

Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 13