
Search in the Catalogues and Directories

Hits 1 – 20 of 156

1
A Self-Paced Reading Study on Processing Constructions with Different Degrees of Compositionality
In: The 35th Annual Conference on Human Sentence Processing, Mar 2022, UC Santa Cruz, United States ; https://hal.archives-ouvertes.fr/hal-03620795 (2022)
BASE
2
Probing for the Usage of Grammatical Number ...
BASE
3
Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task ...
BASE
4
Not all arguments are processed equally: a distributional model of argument complexity [Journal]
Chersoni, Emmanuele [author]; Santus, Enrico [author]; Lenci, Alessandro [author]
DNB Subject Category: Language
5
Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge
In: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, Aug 2021, Online, France, pp. 1-11, ⟨10.18653/v1/2021.starsem-1.1⟩ ; https://hal.archives-ouvertes.fr/hal-03312774 (2021)
BASE
6
Not all arguments are processed equally: a distributional model of argument complexity
In: Language Resources and Evaluation, Springer Verlag, 2021, 55 (4), pp. 873-900, ⟨10.1007/s10579-021-09533-9⟩ ; ISSN: 1574-020X ; EISSN: 1574-0218 ; https://hal.archives-ouvertes.fr/hal-03533181 (2021)
BASE
7
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
8
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
9
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - : Universal Dependencies Consortium, 2021
BASE
10
Not all arguments are processed equally: a distributional model of argument complexity
In: Springer Netherlands (2021)
BASE
11
Decoding Word Embeddings with Brain-Based Semantic Features ...
BASE
12
Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge ...
BASE
13
A comparative evaluation and analysis of three generations of Distributional Semantic Models ...
BASE
14
Constructional associations trump lexical associations in processing valency coercion
BASE
15
Common-Sense and Common-Knowledge. How much do Neural Language Models know about the world?
In: http://etd.adm.unipi.it/theses/available/etd-03122021-000321/ (2021)
BASE
16
Neologisms in the 2010 and 2021 editions of the "Zingarelli della lingua italiana" dictionary
In: http://etd.adm.unipi.it/theses/available/etd-05102021-214231/ (2021)
BASE
17
Large-scale Cross-lingual Word Sense Disambiguation using Parallel Corpora
In: http://etd.adm.unipi.it/theses/available/etd-09112021-110903/ (2021)
BASE
18
Probing the linguistic knowledge of word embeddings: A case study on colexification
In: http://etd.adm.unipi.it/theses/available/etd-06212021-172428/ (2021)
Abstract: In recent years it has become clear that data is a key resource of power and wealth: the companies able to extract useful information from it are the ones expected to endure and grow their profits. Much of this data is conveyed through natural language, as every day we produce an enormous amount of linguistic data in written or spoken form. Computational resources let us manage such quantities of information in an automated, scalable way, but this first requires ways for computers to represent linguistic knowledge, since computers lack the linguistic proficiency that humans have. For words to be processed by machine models, they generally need some numeric representation the models can use in their computations. One method that has become influential in recent years is word embeddings: representations of terms as real-valued vectors, such that words closer in the vector space are expected to be similar in meaning. These techniques are very popular and have shown great success in multiple studies, but it is still not clear what kind of linguistic knowledge they acquire, nor exactly how some of their parameters affect that knowledge. The present work aims to shed light on these questions.

We test embeddings on a linguistic problem: colexification, the phenomenon in which, within a language, multiple meanings are expressed by a single word form. One suggested reason for this phenomenon is a semantic connection between the meanings: two similar meanings are more likely to be conveyed by a single term than two meanings from completely different fields. We assume a relationship between distributional similarity and colexification, in the sense that the former is informative about the latter; this assumption is grounded in the results of Xu et al. (2020), which we use as a general guide for this investigation. Using word embedding models, specifically fastText trained with different window sizes, we obtained cosine similarity values between pairs of words. We then performed two predictive tasks, showing that a model as simple as logistic regression, fed nothing but the cosine similarity between word vectors, can predict whether a pair of meanings is a highly frequent colexification, and whether it is a colexification at all. The results suggest that the models acquired a certain knowledge of word meaning, and by varying the window-size parameter we inspected what kind of linguistic knowledge they acquired with respect to colexification.

The project covered the whole working process, from collecting, understanding, and cleaning the data, to training the fastText models and evaluating the results of the predictive model. Our findings indicate that a narrow window size is sufficient for the model to acquire a good level of semantic knowledge in a distributional similarity task, and that, depending on the task, changing the window size does not always change the results. This raises a broader question: in which tasks does window size matter, and what does that tell us about those tasks?
Keyword: PHILOLOGY; LITERATURE AND LINGUISTICS
URL: http://etd.adm.unipi.it/theses/available/etd-06212021-172428/
BASE
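
The pipeline this abstract describes (fastText similarities fed to a logistic regression) is compact enough to sketch in code. The following is a minimal illustration under stated assumptions, not the thesis's actual code: the file names corpus.txt and pairs.csv are hypothetical, and gensim's FastText plus scikit-learn's LogisticRegression stand in for whatever tooling the author used.

    # Minimal sketch of the abstract's pipeline. Assumes a tokenized corpus
    # (corpus.txt, one sentence per line) and labeled word pairs
    # (pairs.csv: word1,word2,label) -- both hypothetical file names.
    import csv
    from gensim.models import FastText
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Train fastText; window is the parameter under study (a narrow value,
    # per the findings, already suffices for the similarity task).
    sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]
    model = FastText(sentences, vector_size=100, window=2, min_count=5, epochs=5)

    # Cosine similarity between the two words is the sole predictive feature.
    X, y = [], []
    with open("pairs.csv", encoding="utf-8") as f:
        for w1, w2, label in csv.reader(f):
            X.append([model.wv.similarity(w1, w2)])
            y.append(int(label))  # 1 = colexified (or highly frequent), 0 = not

    # Logistic regression on similarity alone, as in the two predictive tasks.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")

Rerunning the same script with different window values is then all it takes to reproduce the window-size comparison the abstract mentions.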
19
"Love is an open door but not a table". Come uomini e macchine 'comprendono' le metafore lessicalizzate e creative.
In: http://etd.adm.unipi.it/theses/available/etd-03242021-214055/ (2021)
BASE
20
Interpretations of the concept of compositionality of idiomatic expressions in the psycholinguistic literature
In: http://etd.adm.unipi.it/theses/available/etd-09132021-151853/ (2021)
BASE


Hits by source type:
Catalogues: 5, 0, 7, 0, 3, 0, 0
Bibliographies: 18, 0, 0, 0, 0, 0, 0, 0, 8
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 122, 0, 0, 0, 0