1. Mapping theoretical and methodological perspectives for understanding speech interface interactions. In: CHI EA '19 Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems.

2. Multimodal continuous turn-taking prediction using multiscale RNNs. In: ICMI 2018 - 20th ACM International Conference on Multimodal Interaction.

4. Survival at the museum: A cooperation experiment with emotionally expressive virtual characters. In: ICMI '18 Proceedings of the 20th ACM International Conference on Multimodal Interaction.

6. Towards predicting dialog acts from previous speakers' non-verbal cues. BIBTEX 2017.

8. Building a Database of Political Speech: Does Culture Matter in Charisma Annotations? In: Conference papers (2014).

11. Speech Intelligibility prediction using a Neurogram Similarity Index Measure.

13. Error Metrics for Impaired Auditory Nerve Responses of Different Phoneme Groups. In: Interspeech 2009.

14. Measurement of Phonemic Degradation in Sensorineural Hearing Loss using a Computational Model of the Auditory Periphery. In: IET Irish Signals and Systems Conference ISSC 2009.

17. On Parsing Visual Sequences with the Hidden Markov Model.

Abstract: Hidden Markov Models have been employed in many vision applications to model and identify events of interest. Their use is common in applications where HMMs are used to classify previously divided segments of video as one of a set of events being modelled. HMMs can also simultaneously segment and classify events within a continuous video, without the need for a separate first step to identify the start and end of the events. This is significantly less common. This paper is an exploration of the development of HMM frameworks for such complete event recognition. A review of how HMMs have been applied to both event classification and recognition is presented. The discussion evolves in parallel with an example of a real application in psychology for illustration. The complete videos depict sessions where candidates perform a number of different exercises under the instruction of a psychologist. The goal is to isolate portions of video containing just one of these exercises. The exercise involves rotating the head of a kneeling subject to the left, back to centre, to the right, to the centre, and repeating a number of times. By designing a HMM system to automatically isolate portions of video containing this exercise, issues such as the strategy of choice of event to be modelled, feature design and selection, as well as training and testing are reviewed. Thus this paper shows how HMMs can be more extensively applied in the domain of event recognition in video.

Keywords: Digital Engagement; Head rotation; Hidden Markov Models; Motion vector; Rotation event; Sign language; Speech recognition; Telecommunications

URL: http://hdl.handle.net/2262/89613 http://people.tcd.ie/nharte https://doi.org/10.1155/2009/924287 https://link.springer.com/article/10.1155/2009/924287

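One standard way to realize the simultaneous segmentation-and-classification the abstract describes is Viterbi decoding over a background/event HMM: every frame is assigned a state, and contiguous runs of the "event" state give segment boundaries directly, with no separate start/end detection step. The sketch below is a minimal illustration of that idea, not the paper's system; the two-state model, all probabilities, and the fake frame likelihoods are invented for the example.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state sequence. log_pi: (N,) initial log-probs,
    log_A: (N, N) transition log-probs, log_B: (T, N) per-frame
    emission log-likelihoods for each state."""
    T, N = log_B.shape
    delta = np.empty((T, N))           # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # prev-state x next-state
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + log_B[t]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):     # backtrack along the best path
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy two-state model: 0 = background, 1 = head-rotation event
# (all numbers invented for illustration).
log_pi = np.log([0.9, 0.1])
log_A = np.log([[0.9, 0.1],
                [0.1, 0.9]])           # sticky self-transitions
# Fake per-frame log-likelihoods: frames 3..6 look like the event.
log_bg = np.array([-0.5, -0.5, -0.5, -3, -3, -3, -3, -0.5, -0.5, -0.5])
log_ev = np.array([-3, -3, -3, -0.5, -0.5, -0.5, -0.5, -3, -3, -3])
log_B = np.stack([log_bg, log_ev], axis=1)

path = viterbi(log_pi, log_A, log_B)
# path -> [0 0 0 1 1 1 1 0 0 0]: the run of 1s is the detected event segment.
```

The sticky self-transitions play the role of a duration prior: isolated single-frame likelihood blips are cheaper to explain as noise than as a state switch, so the decoded segments stay contiguous.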
18. Discriminative Multi-Resolution Sub-Band and Segmental Phonetic Model Combination.

19. A Novel Model For Phoneme Recognition Using Phonetically Derived Features ...