61. SEAGLE: A platform for comparative evaluation of semantic encoders for information retrieval
63. Specializing distributional vectors of all words for lexical entailment
64. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions
|
|
65. Cross-lingual semantic specialization via lexical relation induction
|
|
66. Generalized tuning of distributional word vectors for monolingual and cross-lingual lexical entailment
|
|
67. SenZi: A sentiment analysis lexicon for the latinised Arabic (Arabizi)
|
|
68. Informing unsupervised pretraining with external linguistic knowledge
|
|
69. Do we really need fully unsupervised cross-lingual embeddings?
|
|
70. Are we consistently biased? Multidimensional analysis of biases in distributional word vectors
|
|
71. Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only ...
|
|
72. Unsupervised Cross-Lingual Information Retrieval Using Monolingual Data Only ...
|
|
73. Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization ...
|
|
74. Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical Resources ...
|
|
75. A Resource-Light Method for Cross-Lingual Semantic Textual Similarity ...
|
|
76. Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical Resources ...
|
|
77. Unsupervised Cross-Lingual Information Retrieval Using Monolingual Data Only
Abstract:
We propose a fully unsupervised framework for ad-hoc cross-lingual information retrieval (CLIR) which requires no bilingual data at all. The framework leverages shared cross-lingual word embedding spaces in which terms, queries, and documents can be represented, irrespective of their actual language. The shared embedding spaces are induced solely on the basis of monolingual corpora in two languages through an iterative process based on adversarial neural networks. Our experiments on the standard CLEF CLIR collections for three language pairs of varying degrees of language similarity (English-Dutch/Italian/Finnish) demonstrate the usefulness of the proposed fully unsupervised approach. Our CLIR models with unsupervised cross-lingual embeddings outperform baselines that utilize cross-lingual embeddings induced relying on word-level and document-level alignments. We then demonstrate that further improvements can be achieved by unsupervised ensemble CLIR models. We believe that the proposed framework is the first step towards the development of effective CLIR models for language pairs and domains where parallel data are scarce or non-existent.

Keywords:
cross-lingual vector spaces; unsupervised cross-lingual IR

URL: https://www.repository.cam.ac.uk/handle/1810/279400
DOI: https://doi.org/10.17863/CAM.26775
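The retrieval idea described in the abstract above — representing queries and documents as vectors in one shared cross-lingual embedding space and ranking by vector similarity — can be sketched as follows. This is a toy illustration only: the hand-made 4-dimensional English–Dutch vectors below stand in for an aligned shared space, whereas the paper induces its embeddings adversarially from monolingual corpora, which is not shown here.

```python
import numpy as np

def embed(text, vectors, dim=4):
    """Represent a text as the average of its in-vocabulary word vectors.

    `vectors` maps tokens from BOTH languages into one shared
    cross-lingual space; out-of-vocabulary tokens are skipped.
    """
    vecs = [vectors[t] for t in text.lower().split() if t in vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def rank(query, docs, vectors):
    """Rank documents by cosine similarity to the query in the shared space."""
    q = embed(query, vectors)
    scores = []
    for i, d in enumerate(docs):
        v = embed(d, vectors)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        scores.append((float(q @ v / denom) if denom else 0.0, i))
    return sorted(scores, reverse=True)

# Toy shared space: the English and Dutch words for "dog" and "house"
# sit close together, as an aligned cross-lingual space would place them.
shared = {
    "dog":   np.array([1.0, 0.1, 0.0, 0.0]),
    "hond":  np.array([0.9, 0.2, 0.0, 0.0]),
    "house": np.array([0.0, 0.0, 1.0, 0.1]),
    "huis":  np.array([0.0, 0.1, 0.9, 0.0]),
}
docs = ["de hond blaft", "het huis is groot"]
print(rank("dog", docs, shared))  # the Dutch document about the dog ranks first
```

Because queries and documents live in the same space, the English query matches the Dutch document without any translation step — which is exactly what makes the approach applicable when no bilingual data exist.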
78. ArguminSci: a tool for analyzing argumentation and rhetorical aspects in scientific writing
80. Investigating the role of argumentation in the rhetorical analysis of scientific publications with neural multi-task learning models