1 | EnvEdit: Environment Editing for Vision-and-Language Navigation ...
BASE
2 | How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language ...
4 | multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning ...
6 | FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging ...
7 | ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning ...
8 | Integrating Visuospatial, Linguistic and Commonsense Structure into Story Visualization ...
10 | Inducing Transformer’s Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks ...
11 | I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling ...
12 | InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection ...
14 | multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning ...
15 | ChrEnTranslate: Cherokee-English Machine Translation Demo with Quality Estimation and Corrective Feedback ...
16 | Integrating Visuospatial, Linguistic, and Commonsense Structure into Story Visualization ...
17 | Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline ...
19 | Finding a Balanced Degree of Automation for Summary Evaluation ...
20 | Analyzing the Limits of Self-Supervision in Handling Bias in Language ...

Abstract:
Prompting inputs with natural language task descriptions has emerged as a popular mechanism to elicit reasonably accurate outputs from large-scale generative language models with little to no in-context supervision. This also helps gain insight into how well language models capture the semantics of a wide range of downstream tasks purely from self-supervised pre-training on massive corpora of unlabeled text. Such models have naturally also been exposed to a lot of undesirable content like racist and sexist language, and there is limited work on awareness of models along these dimensions. In this paper, we define and comprehensively evaluate how well such language models capture the semantics of four tasks for bias: diagnosis, identification, extraction and rephrasing. We define three broad classes of task descriptions for these tasks: statement, question, and completion, with numerous lexical variants within each class. We study the efficacy of prompting for each task using these classes and the null task ...

Comment: 16 pages, 1 figure

Keywords:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2112.08637
     https://arxiv.org/abs/2112.08637