
Search in the Catalogues and Directories

Hits 1 – 17 of 17

1. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages ...
2. Improving Word Translation via Two-Stage Contrastive Learning ...
3. Prix-LM: Pretraining for Multilingual Knowledge Base Construction ...
4. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
5. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
6. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
Liu, Qianchu; Liu, Fangyu; Collier, Nigel. Apollo - University of Cambridge Repository, 2021
7. Visually Grounded Reasoning across Languages and Cultures ...
8. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
9. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
10. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
Liu, Fangyu; Vulić, I; Korhonen, Anna-Leena. Apollo - University of Cambridge Repository, 2021
11. Visually Grounded Reasoning across Languages and Cultures ...
12. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
13. Visually Grounded Reasoning across Languages and Cultures ...
14. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
15. Self-Alignment Pretraining for Biomedical Entity Representations
Liu, Fangyu; Shareghi, Ehsan; Meng, Zaiqiao; Basaldella, Marco; Collier, Nigel. Association for Computational Linguistics: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
Abstract: Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking, where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT, and PubMedBERT, our pretraining scheme proves to be both effective and robust.
Funding: FL is supported by a Grace & Thomas C.H. Chan Cambridge Scholarship. NC and MB would like to acknowledge funding from Health Data Research UK as part of the National Text Analytics project.
URL: https://doi.org/10.17863/CAM.72095
https://www.repository.cam.ac.uk/handle/1810/324645
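The abstract describes the self-alignment scheme only in prose, so a small sketch may help make it concrete. The sketch below, in PyTorch with Hugging Face `transformers`, pulls UMLS synonym pairs (surface forms sharing a concept ID) together in embedding space. It is illustrative only: the paper uses a multi-similarity loss with online hard-pair mining, whereas this sketch substitutes a simpler in-batch InfoNCE objective, and the model name, toy pairs, and hyperparameters are assumptions rather than the authors' configuration.

```python
# A minimal sketch of SapBERT-style self-alignment pretraining.
# Assumptions (not from the record above): PyTorch + Hugging Face
# `transformers`; synonym pairs stand in for UMLS names sharing a CUI;
# InfoNCE replaces the paper's multi-similarity loss with hard mining.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"  # the paper starts from PubMedBERT; any BERT works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

def embed(names):
    """Encode a list of entity names into [CLS] vectors."""
    batch = tokenizer(names, padding=True, truncation=True,
                      max_length=25, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]  # [CLS] token

def alignment_loss(pairs, temperature=0.07):
    """In-batch InfoNCE: each name's positive is its synonym,
    every other name in the batch serves as a negative."""
    a = F.normalize(embed([p[0] for p in pairs]), dim=-1)
    b = F.normalize(embed([p[1] for p in pairs]), dim=-1)
    logits = a @ b.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(len(pairs))     # diagonal = matching synonym
    return F.cross_entropy(logits, targets)

# Toy synonym pairs standing in for UMLS entries.
pairs = [("myocardial infarction", "heart attack"),
         ("hypertension", "high blood pressure"),
         ("cephalalgia", "headache")]

opt = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
opt.zero_grad()
loss = alignment_loss(pairs)
loss.backward()
opt.step()
print(f"alignment loss: {loss.item():.4f}")
```

Repeated over the full UMLS synonym set, updates of this kind cluster surface forms of the same concept in the encoder's embedding space, which is what enables the one-model-for-all entity-linking setup the abstract describes.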
16. Upgrading the Newsroom: An Automated Image Selection System for News Articles ...
17. Upgrading the Newsroom: An Automated Image Selection System for News Articles
In: http://infoscience.epfl.ch/record/280322 (2020)

Hits by source:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 17
© 2013 - 2024 Lin|gu|is|tik