
Search in the Catalogues and Directories

Hits 1 – 15 of 15

1. How Universal is Genre in Universal Dependencies? ... (BASE)
2. Universal Dependencies 2.9. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021 (BASE)
3. Universal Dependencies 2.8.1. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021 (BASE)
4. Universal Dependencies 2.8. Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021 (BASE)
5. Parsing with Pretrained Language Models, Multiple Datasets, and Dataset Embeddings ... (BASE)
6. On the Effectiveness of Dataset Embeddings in Mono-lingual, Multi-lingual and Zero-shot Conditions ... (BASE)
7. Genre as Weak Supervision for Cross-lingual Dependency Parsing ... (BASE)
8. We Need to Talk About train-dev-test Splits ... (BASE)
9. Genre as Weak Supervision for Cross-lingual Dependency Parsing ... (BASE)
10. DaN+: Danish Nested Named Entities and Lexical Normalization ... (BASE)
11. From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding ... (BASE)
Abstract: The lack of publicly available evaluation data for low-resource languages limits progress in Spoken Language Understanding (SLU). As key tasks like intent classification and slot filling require abundant training data, it is desirable to reuse existing data in high-resource languages to develop models for low-resource scenarios. We introduce xSID, a new benchmark for cross-lingual Slot and Intent Detection in 13 languages from 6 language families, including a very low-resource dialect. To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer. We study two setups which differ by type and language coverage of the pre-trained embeddings. Our results show that jointly learning the main tasks with masked language modeling is effective for slots, while machine translation transfer works best for intent classification. To appear in the proceedings of NAACL 2021.
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2105.07316
DOI: https://dx.doi.org/10.48550/arxiv.2105.07316
12. From Masked-Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding ... NAACL 2021; van der Goot, Rob. Underline Science Inc., 2021 (BASE)
13. Lexical Normalization for Code-switched Data and its Effect on POS-tagging ... (BASE)
14. Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor. In: Computational Linguistics, Vol 46, Iss 2, pp. 487-497 (2020) (BASE)
15. Bleaching Text: Abstract Features for Cross-lingual Gender Prediction ... (BASE)

Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 15
© 2013 - 2024 Lin|gu|is|tik