
Search in the Catalogues and Directories

Hits 1 – 20 of 289

1
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
2
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
3
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
4
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
5
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
6
DWUG ES: Diachronic Word Usage Graphs for Spanish ...
BASE
7
Analyzing COVID-19 Medical Papers Using Artificial Intelligence: Insights for Researchers and Medical Professionals
In: Big Data and Cognitive Computing; Volume 6; Issue 1; Pages: 4 (2022)
BASE
8
Towards a theoretical understanding of word and relation representation
Allen, Carl S. - : The University of Edinburgh, 2022
Abstract: Representing words by vectors of numbers, known as word embeddings, enables computational reasoning over words and is foundational to automating tasks involving natural language. For example, by crafting word embeddings so that similar words have similar-valued embeddings, often thought of as nearby points in a semantic space, word similarity can be readily assessed using a variety of metrics. In contrast, judging whether two words are similar from more common representations, such as their English spelling, is often impossible (e.g. cat/feline); and to predetermine and store all similarities between all words is prohibitively time-consuming, memory-intensive and subjective. As a succinct means of representing words – or, perhaps, the concepts that words themselves represent – word embeddings also relate to information theory and cognitive science. Numerous algorithms have been proposed to learn word embeddings from different data sources, such as large text corpora, document collections and “knowledge graphs” – compilations of facts in the form ⟨subject entity, relation, object entity⟩, e.g. ⟨Edinburgh, capital of, Scotland⟩. The broad aim of these algorithms is to capture information from the data in the components of each word embedding that is useful for a certain task or suite of tasks, such as detecting sentiment in text, identifying the topic of a document, or predicting whether a given fact is true or false.
In this thesis, we focus on word embeddings learned from text corpora and knowledge graphs. Several well-known algorithms learn word embeddings from text on an unsupervised (or, more recently, self-supervised) basis by learning to predict context words that occur around each word, e.g. word2vec (Mikolov et al., 2013a,b) and GloVe (Pennington et al., 2014). The parameters of word embeddings learned in this way are known to reflect word co-occurrence statistics, but how they capture semantic meaning has been largely unclear. Knowledge graph representation models learn representations both of entities – which include words, people, places, etc. – and of the binary relations between them, typically by training the model to predict known true facts of the knowledge graph in a supervised manner. Despite steady improvements in the accuracy with which such models predict facts, both seen and unseen during training, little is understood of the latent structure that allows them to do so.
This limited understanding of how latent semantic structure is encoded in the geometry of word embeddings and knowledge graph representations leaves unclear any principled direction for improving their performance, reliability or interpretability. To address this: 1. we theoretically justify the empirical observation that particular geometric relationships between word embeddings learned by algorithms such as word2vec and GloVe correspond to semantic relations between words; and 2. we extend this correspondence between semantics and geometry to the entities and relations of knowledge graphs, providing a model for the latent structure of knowledge graph representation that is linked to that of word embeddings.
We first give a probabilistic explanation for why word embeddings of analogies – phrases of the form “man is to king as woman is to queen” – often appear to approximate a parallelogram. This “analogy phenomenon” has generated much intrigue, since word embeddings are not trained to achieve it, yet it allows many analogies to be “solved” simply by adding and subtracting their embeddings, e.g. w_queen ≈ w_king − w_man + w_woman. A similar probabilistic rationale explains how semantic relations such as similarity and paraphrase are encoded in the relative geometry of word embeddings.
Lastly, we extend this correspondence between semantics and embedding geometry to the specific relations of knowledge graphs. We derive a hierarchical categorisation of relation types and, for each type, identify the notional geometric relationship between word embeddings of related entities. This gives a theoretical basis for relation representation against which we can contrast a range of knowledge graph representation models. By analysing properties of their representations and their relation-by-relation performance, we show that the closer the agreement between how a model represents a relation and our theoretically inspired basis, the better the model performs. Indeed, a knowledge graph representation model inspired by this research achieved state-of-the-art performance (Balažević et al., 2019b).
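To make the analogy arithmetic above concrete, here is a minimal Python sketch. It is illustrative only: the three-dimensional vectors and tiny vocabulary are invented for the example (embeddings trained by word2vec or GloVe would have hundreds of dimensions), and the helper names cosine and solve_analogy are hypothetical rather than taken from the thesis.

import numpy as np

# Toy embeddings standing in for trained word vectors; the values are
# hypothetical and chosen only so that the parallelogram is exact.
embeddings = {
    "man":   np.array([0.9, 0.1, 0.0]),
    "woman": np.array([0.9, 0.1, 1.0]),
    "king":  np.array([0.1, 0.9, 0.0]),
    "queen": np.array([0.1, 0.9, 1.0]),
}

def cosine(u, v):
    # Cosine similarity, a common metric for comparing embeddings.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c):
    # "a is to b as c is to ?": form the target vector w_b - w_a + w_c
    # and return the nearest remaining word by cosine similarity.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(solve_analogy("man", "king", "woman"))  # prints: queen

With real corpus-trained embeddings the parallelogram holds only approximately, which is why the final step ranks candidates by cosine similarity instead of expecting an exact match; excluding the three query words from the candidate set is the usual convention when evaluating analogies.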
Keywords: automating language tasks; embeddings; knowledge graphs; semantic properties; semantic relationships
URL: https://hdl.handle.net/1842/38601
https://doi.org/10.7488/era/1864
BASE
9
Representation of Explanations of Possibilistic Inference Decisions
In: Symbolic and Quantitative Approaches to Reasoning with Uncertainty ; ECSQARU 2021: European Conference on Symbolic and Quantitative Approaches with Uncertainty, Sep 2021, Prague, Czech Republic. pp. 513-527, ⟨10.1007/978-3-030-86772-0_37⟩ ; https://hal-cea.archives-ouvertes.fr/cea-03406884 (2021)
BASE
10
Injecting Inductive Biases into Distributed Representations of Text ...
Prokhorov, Victor. - : Apollo - University of Cambridge Repository, 2021
BASE
11
Querying knowledge graphs in natural language ...
BASE
12
Graphs, Computation, and Language ...
Ustalov, Dmitry. - : Zenodo, 2021
BASE
13
DWUG SV: Diachronic Word Usage Graphs for Swedish ...
BASE
14
DWUG SV: Diachronic Word Usage Graphs for Swedish ...
BASE
15
DWUG DE: Diachronic Word Usage Graphs for German ...
BASE
16
RefWUG: Diachronic Reference Word Usage Graphs for German ...
BASE
17
DWUG EN: Diachronic Word Usage Graphs for English ...
BASE
18
Deep learning based sign language recognition (original title: Αναγνώριση νοηματικής γλώσσας με τεχνικές βαθιάς μηχανικής μάθησης) ...
Parelli, Maria. - : National Technical University of Athens, 2021
BASE
19
Graphs, Computation, and Language ...
Ustalov, Dmitry. - : Zenodo, 2021
BASE
20
RefWUG: Diachronic Reference Word Usage Graphs for German ...
BASE

