1 | IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages ...
5 | Modelling Latent Translations for Cross-Lingual Transfer ...
6 | Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ...
7 | Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions ...
8 | Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval ...
10 | Visually Grounded Reasoning across Languages and Cultures ...
15 | Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions ...
16 | MeDAL ...

Abstract:
Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. It was published at the ClinicalNLP workshop at EMNLP.

📜 Paper · 💻 Code · 💾 Dataset (Kaggle) · 💽 Dataset (Zenodo)

Citation:
@inproceedings{wen-etal-2020-medal,
  title     = "{M}e{DAL}: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining",
  author    = "Wen, Zhi and Lu, Xing Han and Reddy, Siva",
  booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
  month     = nov,
  year      = "2020",
  address   = "Online",
  publisher = "Association for Computational Linguistics",
  url       = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.15",
  pages     = "130--135",
}

License, Terms and Conditions: The ELECTRA model is licensed under Apache 2.0. The license for the ...

Keyword:
deep learning; health science; natural language understanding

URL: https://dx.doi.org/10.5281/zenodo.4265633 https://zenodo.org/record/4265633
19 | CoQA: A Conversational Question Answering Challenge
In: Transactions of the Association for Computational Linguistics, Vol 7, Pp 249-266 (2019)
20 | Universal Dependencies 2.2
In: https://hal.archives-ouvertes.fr/hal-01930733 ; 2018