
Search in the Catalogues and Directories

Hits 1 – 5 of 5

1. Challenges in Audio Processing of Terrorist-Related Data
In: International Conference on Multimedia Modeling, Springer, Jan 2019, Thessaloniki, Greece. https://hal.archives-ouvertes.fr/hal-02415176 (2019)
Source: BASE
2. Challenges in Audio Processing of Terrorist-Related Data
In: International Conference on Multimedia Modeling, Springer, Jan 2019, Thessaloniki, Greece. https://hal.archives-ouvertes.fr/hal-02387373 (2019)
Source: BASE
3. Does the prosodic emphasis of sentential context cause deeper lexical-semantic processing?
In: Language, Cognition and Neuroscience, Taylor and Francis, 2019, 34, pp. 29–42. ISSN 2327-3798, EISSN 2327-3801. DOI: 10.1080/23273798.2018.1499945. https://hal.univ-lille.fr/hal-01917002 (2019)
Source: BASE
4. Acoustic event, spoken keyword and emotional outburst detection
Xu, Yijia (2019)
Source: BASE
5. Event Structure In Vision And Language
In: Publicly Accessible Penn Dissertations (2019)
Abstract: Our visual experience is surprisingly rich: We do not only see low-level properties such as colors or contours; we also see events, or what is happening. Within linguistics, the examination of how we talk about events suggests that relatively abstract elements exist in the mind which pertain to the relational structure of events, including general thematic roles (e.g., Agent), Causation, Motion, and Transfer. For example, “Alex gave Jesse flowers” and “Jesse gave Alex flowers” both refer to an event of transfer, with the directionality of the transfer having different social consequences. The goal of the present research is to examine the extent to which abstract event information of this sort (event structure) is generated in visual perceptual processing. Do we perceive this information, just as we do with more ‘traditional’ visual properties like color and shape? In the first study (Chapter 2), I used a novel behavioral paradigm to show that event roles – who is acting on whom – are rapidly and automatically extracted from visual scenes, even when participants are engaged in an orthogonal task, such as color or gender identification. In the second study (Chapter 3), I provided functional magnetic resonance imaging (fMRI) evidence for commonality in content between neural representations elicited by static snapshots of actions and by full, dynamic action sequences. These two studies suggest that relatively abstract representations of events are spontaneously extracted from sparse visual information. In the final study (Chapter 4), I return to language, the initial inspiration for my investigations of events in vision. Here I test the hypothesis that the human brain represents verbs in part via their associated event structures. Using a model of verbs based on event-structure semantic features (e.g., Cause, Motion, Transfer), it was possible to successfully predict fMRI responses in language-selective brain regions as people engaged in real-time comprehension of naturalistic speech. Taken together, my research reveals that in both perception and language, the mind rapidly constructs a representation of the world that includes events with relational structure.
Keyword: action recognition; Cognitive Psychology; event perception; fMRI; Linguistics; Neuroscience and Neurobiology; semantic structure; thematic roles; visual perception
URL: https://repository.upenn.edu/cgi/viewcontent.cgi?article=5004&context=edissertations
https://repository.upenn.edu/edissertations/3218
Source: BASE

© 2013 – 2024 Lin|gu|is|tik