
Search in the Catalogues and Directories

Hits 1 – 20 of 52

1
MM-COVID: A Multilingual and Multimodal Data Repository for Combating COVID-19 Disinformation ...
Li, Yichuan; Jiang, Bohan; Shu, Kai. - : Zenodo, 2021
BASE
2
MM-COVID: A Multilingual and Multimodal Data Repository for Combating COVID-19 Disinformation ...
Li, Yichuan; Jiang, Bohan; Shu, Kai. - : Zenodo, 2021
BASE
3
Neuro-Cognitive Differences in Semantic Processing Between Native Speakers and Proficient Learners of Mandarin Chinese
In: Front Psychol (2021)
BASE
4
MM-COVID: A Multilingual and Multimodal Data Repository for Combating COVID-19 Disinformation ...
Li, Yichuan; Jiang, Bohan; Shu, Kai. - : arXiv, 2020
BASE
5
Hierarchical Propagation Networks for Fake News Detection: Investigation and Exploitation
In: Proceedings of the International AAAI Conference on Web and Social Media; Vol. 14 (2020): Fourteenth International AAAI Conference on Web and Social Media; 626-637 ; 2334-0770 ; 2162-3449 (2020)
BASE
6
Computational Modeling of Affixoid Behavior in Chinese Morphology ...
BASE
7
The Secret to Popular Chinese Web Novels: A Corpus-Driven Study
Lin, Yi-Ju; Hsieh, Shu-Kai. - : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2019. : OASIcs - OpenAccess Series in Informatics. 2nd Conference on Language, Data and Knowledge (LDK 2019), 2019
BASE
8
A realistic and robust model for Chinese word segmentation ...
BASE
9
The Secret to Popular Chinese Web Novels: A Corpus-Driven Study ...
Lin, Yi-Ju; Hsieh, Shu-Kai. - : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, Wadern/Saarbruecken, Germany, 2019
BASE
10
Mandarin Chinese words and parts of speech : corpus-based foundational studies
Huang, Chu-Ren; Chen, Keh-Jiann; Hsieh, Shu-Kai. - New York : Routledge, 2017
UB Frankfurt Linguistik
11
Mandarin Chinese words and parts of speech : a corpus-based study
Hsieh, Shu-Kai; Huang, Chu-Ren; Chen, Keh-Jiann. - New York : Routledge, 2017
BLLDB
UB Frankfurt Linguistik
12
Sentiment detection in micro-blogs using unsupervised chunk extraction
In: Lingua Sinica ; https://hal.archives-ouvertes.fr/hal-01573567 ; Lingua Sinica, 2016, 2 (1), ⟨10.1186/s40655-015-0010-8⟩ ; https://link.springer.com/article/10.1186/s40655-015-0010-8 (2016)
BASE
13
Mismatches in verb complements: a corpus-based study of the complement coercion operation in Chinese
In: Corpus linguistics and linguistic theory. - Berlin ; New York : Mouton de Gruyter 12 (2016) 2, 301-324
BLLDB
14
LMF and its implementation in some Asian languages
In: LMF — Lexical Markup Framework (London, 2013), p. 119-132
MPI für Psycholinguistik
15
CWIKIN: a wiki that helps quicken the development of Chinese Wordnet
In: Lexicography and Dictionaries in the Information Age. Proceedings of the ASIALEX 8th International Conference 2013. Bali, Indonesia, 20 - 22 August 2013 (2013), 91-98
IDS OBELEX meta
16
Observing Features of PTT Neologisms: A Corpus-driven Study with N-gram Model
In: ROCLING ; https://hal.archives-ouvertes.fr/hal-01231908 ; ROCLING, 2013 (2013)
BASE
17
Regular polysemy: A distributional semantic approach
In: http://etd.adm.unipi.it/theses/available/etd-10172013-181141/ (2013)
Abstract: Polysemy and homonymy are two different kinds of lexical ambiguity. The main difference between them is that polysemous words can share the same alternation, where an alternation is the set of senses a word can have, while homonymous words have idiosyncratic alternations. This means that, for instance, a word such as lamb, whose alternation is given by the senses food and animal, is a polysemous word, given that a number of other words share this very alternation food-animal, e.g. the word fish. On the other hand, a word such as ball, whose possible senses are artifact and event, is homonymous, given that no other words share the alternation artifact-event. Furthermore, polysemy highlights two different aspects of the same lexical item, whereas homonymy describes the fact that the same lexical unit is used to represent two different and completely unrelated word meanings.

These two kinds of lexical ambiguity have also been an issue in lexicography, given that there is no clear rule for distinguishing between polysemous and homonymous words. As a matter of principle, we would expect to have different lexical entries for homonymous words, but only one lexical entry with internal differentiation for polysemous words. An important work to mention here is the Generative Lexicon (Pustejovsky, 1995), a theoretical framework for lexical semantics which focuses on the compositionality of word meanings. With regard to polysemy and homonymy, GL provides a clear explanation of how it is possible to understand the appropriate sense of a word in a specific sentence. This is done by looking at the context in which the word appears and, specifically, at the type of argument required by the predication.

These phenomena have also been of interest to computational linguists, who have tried to implement models able to predict the alternations polysemous words can have. One of the most important works on this matter is that of Boleda, Pado and Utt (2012), who propose a model able to predict which words have a particular alternation of senses: given an alternation such as food-animal, the model predicts the words having that alternation. Another relevant work is that of Rumshisky, Grinberg and Pustejovsky (2007), who used syntactic information to detect the senses a polysemous word can have. For instance, given the polysemous word lunch, whose sense alternation is food-event, they first extracted all of the verbs whose object can be the word lunch. This yields verbs requiring an argument expressing the sense of food (the verb cook can be extracted as a verb whose object can be lunch) and verbs requiring the argument of event (again, lunch can be the object of the verb to attend). Finally, they extracted all of the objects those verbs can take (for instance, pasta can be the object of the verb cook, and conference can be the object of the verb to attend). In this way they obtain two clusters, each of which represents words similar to one of the senses of the ambiguous word.

These two models differ completely in how they are implemented, even though both are grounded in one of the most important theories used in computational semantics: the Distributional Hypothesis, which can be stated as "words with similar meaning tend to occur in similar contexts". To implement this theory, it is necessary to describe contexts in a computationally valid way, so that a degree of similarity between two words can be obtained by looking only at their contexts. The mathematical object used is the vector, which stores the frequency of a word in all its contexts. The model using vectors to describe the distributional properties of words is called the Vector Space Model, also known as the Distributional Model.

In this work, our goal is to automatically detect the alternation a word has. To do so, we first considered using the sense discrimination procedure proposed by Schütze. In this method, a Distributional Model is used to create context vectors and sense vectors. A context vector is the sum of the vectors of the words found in a context in which an ambiguous word appears, so there are as many context vectors as there are occurrences of the target word. Once the context vectors are available, the sense vectors are obtained by clustering them: two context vectors representing the same sense of the ambiguous word will be similar, and so will be clustered together. The centroid, that is, the vector given by the sum of the context vectors clustered together, is the sense vector, so there are as many sense vectors as there are senses of an ambiguous word. Our idea was to build on this work and go a step further in the creation of the alternation, but this was not possible for several reasons. We therefore developed a new method to create context vectors, based on the idea that the understanding of an ambiguous word is given by certain elements in the sentence in which the word appears. Our model is able to carry out two tasks: 1) it can predict the alternation of a regular polysemous word; 2) it can distinguish whether the lexical ambiguity of a word is homonymy or regular polysemy.

(A minimal illustrative sketch of the context-clustering step described here is given after the hit list below.)
Keyword: PHILOLOGY; LITERATURE AND LINGUISTICS
URL: http://etd.adm.unipi.it/theses/available/etd-10172013-181141/
BASE
18
Towards an Automatic Measurement of Verbal Lexicon Acquisition: The Case for a Young Children-versus-Adults Classification in French and Mandarin
In: PACLIC 24 Proceedings ; PACLIC 24 : Workshop on Model and Measurement of Meaning (M3) ; https://hal.archives-ouvertes.fr/hal-00992078 ; PACLIC 24 : Workshop on Model and Measurement of Meaning (M3), 2010, Sendai, Japan. pp.809-818 (2010)
BASE
19
Assessing Text Readability Using Hierarchical Lexical Relations Retrieved from WordNet
In: http://www.aclclp.org.tw/clclp/v14n1/v14n1a3.pdf (2009)
BASE
20
Bridging the Gap between Graph Modeling and Developmental Psycholinguistics: An Experiment on Measuring Lexical Proximity in Chinese Semantic Space
In: Proceedings of The 23rd Pacific Asia Conference on Language, Information and Computation ; 23rd Pacific Asia Conference on Language, Information and Computation ; https://hal.archives-ouvertes.fr/hal-00992105 ; 23rd Pacific Asia Conference on Language, Information and Computation, 2009, Hong Kong SAR China. pp. 118-130 (2009)
BASE
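
The sense discrimination procedure described in the abstract of hit 17 (one context vector per occurrence of an ambiguous word, obtained by summing the vectors of the surrounding words, then clustered so that each centroid acts as a sense vector) can be illustrated with a minimal sketch. This is not the thesis implementation: the toy word vectors, the context windows and the choice of k-means with k = 2 are assumptions made purely for illustration.

# Minimal sketch (not the thesis code) of Schütze-style sense discrimination:
# each occurrence of the ambiguous word "lamb" gets a context vector (the sum
# of the vectors of the words around it); clustering those context vectors
# yields one centroid ("sense vector") per induced sense.
# The word vectors and contexts below are illustrative toy data.
import numpy as np
from sklearn.cluster import KMeans

# Toy distributional vectors: co-occurrence counts of each word with three
# hypothetical context dimensions, e.g. ("eat", "animal", "field").
word_vectors = {
    "roast":  np.array([4.0, 0.0, 0.0]),
    "serve":  np.array([3.0, 0.0, 1.0]),
    "mint":   np.array([2.0, 0.0, 0.0]),
    "graze":  np.array([0.0, 3.0, 4.0]),
    "bleat":  np.array([0.0, 4.0, 1.0]),
    "meadow": np.array([0.0, 1.0, 5.0]),
}

# The words observed around each occurrence of the target word "lamb"
# (the target itself is excluded from its own context).
contexts = [
    ["roast", "serve", "mint"],    # food-like use
    ["graze", "meadow"],           # animal-like use
    ["serve", "roast"],            # food-like use
    ["bleat", "graze", "meadow"],  # animal-like use
]

# One context vector per occurrence: the sum of the vectors of its words.
context_vectors = np.array(
    [np.sum([word_vectors[w] for w in ctx], axis=0) for ctx in contexts]
)

# Cluster the context vectors; k = 2 assumes the word alternates between
# exactly two senses (food vs. animal for "lamb").
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(context_vectors)

# The cluster centroids stand in for the induced sense vectors.
sense_vectors = kmeans.cluster_centers_
print("occurrence -> induced sense:", kmeans.labels_)
print("sense vectors:")
print(sense_vectors)

With these toy vectors the food-like and animal-like occurrences of "lamb" fall into different clusters, and the two centroids play the role of the food and animal sense vectors. Note that a k-means centroid is the mean rather than the sum of the clustered context vectors; the two differ only by a constant factor, which does not affect cosine-based comparisons.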
