
Search in the Catalogues and Directories

Hits 1 – 20 of 38

1. SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics ...
2. Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension ...
3. SHAPE: Shifted Absolute Position Embedding for Transformers ...
4. Incorporating Residual and Normalization Layers into Analysis of Masked Language Models ...
5. Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution ...
6. Exploring Methods for Generating Feedback Comments for Writing Learning ...
7. Transformer-based Lexically Constrained Headline Generation ...
8. Transformer-based Lexically Constrained Headline Generation ...
9. Topicalization in Language Models: A Case Study on Japanese ...
10. Lower Perplexity is Not Always Human-Like ...
11. Lower Perplexity is Not Always Human-Like ...
12. An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution ...
13. PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents ...
    Fujii, Ryo; Mita, Masato; Abe, Kaori. arXiv, 2020
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ...
    Abstract: Despite recent achievements in natural language understanding, reasoning over commonsense knowledge remains a major challenge for AI systems. As the name suggests, common sense is related to perception, and humans derive it from experience rather than from literary education. Recent work in NLP and computer vision has made an effort to make such knowledge explicit using written language and visual inputs, respectively. Our premise is that the latter source fits better with the characteristics of commonsense acquisition. In this work, we explore to what extent descriptions of real-world scenes are sufficient to learn common sense about different daily situations, drawing upon visual information to answer script knowledge questions. ...
    Keywords: Information and Knowledge Engineering; Intelligent System; Natural Language Processing; Neural Network
    URL: https://dx.doi.org/10.48448/ykdt-fd88
    https://underline.io/lecture/6568-seeing-the-world-through-text-evaluating-image-descriptions-for-commonsense-reasoning-in-machine-reading-comprehension
15. Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese ...
16. Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction ...
17. Attention is Not Only a Weight: Analyzing Transformers with Vector Norms ...
18. Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness ...
    Akama, Reina; Yokoi, Sho; Suzuki, Jun. arXiv, 2020
19. Modeling Event Salience in Narratives via Barthes' Cardinal Functions ...
20. Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? ...

