1. Embracing Ambiguity: Shifting the Training Target of NLI Models ...
   Source: BASE
2. Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap ...
3. An Evaluation Dataset for Identifying Communicative Functions of Sentences in English Scholarly Papers
   In: Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), May 2020, Marseille, France. https://hal.archives-ouvertes.fr/hal-03272825
4. Keyphrase Generation for Scientific Document Retrieval
   In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), Jul 2020, Online. DOI: 10.18653/v1/2020.acl-main.105. https://hal.archives-ouvertes.fr/hal-02556086
5. A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions ...
6. Multi-sense Embeddings through a Word Sense Disambiguation Process
Abstract:
Natural Language Understanding has seen an increasing number of publications in recent years, especially after robust word embedding models became popular. These models gained a special place in the spotlight when they proved able to capture and represent semantic relations underlying huge amounts of data. Nevertheless, traditional models often fall short on intrinsic linguistic issues such as polysemy and homonymy. Multi-sense word embeddings were devised to alleviate these and other problems by representing each word sense separately, but studies in this area are still in their infancy and much remains to be explored. We follow this scenario by proposing an unsupervised technique that disambiguates and annotates words with their specific sense, taking their context into account. The annotated words are then used to train a word embedding model that produces a more accurate vector representation. We test our approach on 6 different benchmarks for the word similarity task, showing that it sustains good results and often outperforms current state-of-the-art systems.
Full text: https://deepblue.lib.umich.edu/bitstream/2027.42/145475/3/tacl.pdf

Keywords: Computer Science; Engineering; word similarity; Word2vec; WordNet; WSD
URL: https://hdl.handle.net/2027.42/145475
7. Leveraging Monolingual Data for Crosslingual Compositional Word Representations ...
11. Looking for Transliterations in a Trilingual English, French and Japanese Specialised Comparable Corpus
   In: LREC Workshop on Comparable Corpora, Language Resources and Evaluation Conference (LREC'08), May 2008, Marrakech, Morocco, 4 pp. https://hal.archives-ouvertes.fr/hal-00417726