81. Inferring Aspect-Specific Opinion Structure in Product Reviews ... Carter, David. Université d'Ottawa / University of Ottawa, 2015.

82. Corpus-supported academic writing: how can technology help? ...

83. Learning semantic types and relations from text ... Hovy, Dirk. University of Southern California Digital Library (USC.DL), 2015.

84. Language Learning Tasks and Automatic Analysis of Learner Language: Connecting FLTL and NLP design of ICALL materials supporting use in real-life instruction ...

85. Word Sense Disambiguation with GermaNet ...

86. Discovering and disambiguating named entities in text ...

87. Form and meaning in dialog-based computer-assisted language learning ...

88. Semi-automatic processing of text corpora in the field of biodiversity for the purpose of terminology extraction ...

89. A Computational Study of American Sign Language Nonmanuals. In: http://rave.ohiolink.edu/etdc/view?acc_num=osu1436909704 (2015).

90. Latent Semantic Analysis, Corpus Stylistics and Machine Learning Stylometry for Translational and Authorial Style Analysis: The Case of Denys Johnson-Davies' Translations into English. In: http://rave.ohiolink.edu/etdc/view?acc_num=kent1429300641 (2015).

91. The eras and trends of automatic short answer grading. In: International Journal of Artificial Intelligence in Education 25 (2015) 1, pp. 60-117.

93. Investigating the use of distributional semantic models for co-hyponym identification in special corpora

Abstract:
Cognitive science assumes knowledge to consist of concepts that are organised and maintained by complex processes taking place in human minds. These processes are not yet directly accessible. Language is still the primary medium for communicating knowledge, and presumably linguistic objects and structures are expressions of knowledge and its organisation in the mind. Collecting terms (i.e., creating a specialised vocabulary) and capturing their relationships are thus important mechanisms for distilling knowledge from specialised texts and for formalising it for machines. The approach taken in this thesis is to analyse the co-hyponymy relationships between terms as an organisational mechanism. Co-hyponyms are sets of lexical units sharing a common hypernym; bank and building society, for example, are co-hyponyms of the hypernym financial organisation. Analysing the co-hyponymy relationships between terms is important because it bridges the semantic gap between a) specialised lexical knowledge, b) the quantitative interpretation of meanings in specialised discourse, and c) machine-accessible conceptualisation of knowledge. This thesis proposes a vector-based distributional representation of terms in order to construct a quantitative conceptual model of kinds-sorts in a given field of knowledge. Among empirical methods for analysing linguistic structures, distributional approaches to semantics encode language data into models that should correspond to the meanings of linguistic entities. The meaning of an entity, such as a word or a phrase, is assumed to be a function of its statistical distribution across contexts. To use these methods we thus need to define (a) the contexts, that is, which statistical information must be collected; and (b) the functions, that is, how this information must be used to correlate with a meaning. This thesis is a study of corpus-based distributional methods for characterising co-hyponymy between terms.
Terms are represented as vectors to form a so-called term-space model. To obviate the curse of dimensionality and to facilitate the construction of models, novel methods employing sparse random projections are proposed: Random Manhattan indexing is used to construct L1-normed spaces, and random indexing is used for L2-normed spaces. Following these steps, a memory-based classifier exploits the distance between vectors to identify the presence of targeted co-hyponymy relationships. An evaluation is also performed to assess any reciprocal influences of the method's parameters on its performance. User-friendliness, flexibility in updating and maintenance, and an innate capacity to resemble conceptual structures in a knowledge domain are the advantages of this method.
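The pipeline the abstract describes (sparse random projections to build a term-space model, then vector distances to flag co-hyponym candidates) can be sketched roughly as follows. This is a minimal illustration under invented assumptions, not the thesis's implementation: the toy corpus, dimensionality, and context window are made up, and the plain L1 distance below only stands in for Random Manhattan indexing, which draws its projection entries from a different distribution.

```python
import math
import random
from collections import defaultdict

DIM = 300       # reduced dimensionality (illustrative choice, not the thesis's setting)
NON_ZERO = 8    # number of non-zero entries per sparse random index vector

random.seed(0)  # fixed seed so the sketch is reproducible

def index_vector():
    """Sparse ternary random vector: all zeros except NON_ZERO entries of +/-1."""
    v = [0.0] * DIM
    for pos in random.sample(range(DIM), NON_ZERO):
        v[pos] = random.choice((1.0, -1.0))
    return v

def term_space(sentences, window=2):
    """Build a term-space model: each term's vector is the sum of the random
    index vectors of the words occurring within `window` positions of it."""
    index = defaultdict(index_vector)          # one fixed random vector per context word
    terms = defaultdict(lambda: [0.0] * DIM)
    for sent in sentences:
        for i, word in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    for k, x in enumerate(index[sent[j]]):
                        terms[word][k] += x
    return terms

def cosine(u, v):
    """Similarity in the L2-normed space (plain random indexing)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def manhattan(u, v):
    """L1 distance; only an approximation of what an L1-normed RMI space gives."""
    return sum(abs(a - b) for a, b in zip(u, v))

# Toy corpus: "bank" and "credit_union" (stand-ins for co-hyponyms of
# "financial organisation") occur in near-identical contexts; "dog" does not.
sentences = [
    "the bank offers a savings account".split(),
    "the credit_union offers a savings account".split(),
    "the bank approved the loan".split(),
    "the credit_union approved the loan".split(),
    "we walked the dog in the park".split(),
]
terms = term_space(sentences)

# Memory-based check: the co-hyponym pair should be closer than the control pair.
print(cosine(terms["bank"], terms["credit_union"]) >
      cosine(terms["bank"], terms["dog"]))          # prints True on this corpus
```

Because the index vectors are near-orthogonal with high probability, summing them preserves context overlap well enough that distance in the projected space tracks distributional similarity, which is what the memory-based classifier relies on.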

Keywords: Computational linguistics; Computational terminology; Data mining; Distributional semantic models; Information extraction; Insight Centre for Data Analytics; Lexicon; Machine learning; Natural language processing; Random projections; Semantics; Statistical natural language processing; Terminology

URL: http://hdl.handle.net/10379/5205

94. The duality of expertise: identifying expertise claims and community opinions within online forum dialogue

95. Sentiment Data Flow Analysis by Means of Dynamic Linguistic Patterns

96. An Information-theoretic approach to production and comprehension of discourse markers ...

97. Analysing entity context in multilingual Wikipedia to support entity-centric retrieval applications ...

100. Explaining Delta, Or: How Do Distance Measures For Authorship Attribution Work? ...