
Search in the Catalogues and Directories

Hits 1–20 of 419

1
BERT-based Semantic Model for Rescoring N-best Speech Recognition List
In: INTERSPEECH 2021, Aug 2021, Brno, Czech Republic; https://hal.archives-ouvertes.fr/hal-03248881; https://www.interspeech2021.org/ (2021)
BASE
2
Introduction of semantic model to help speech recognition
In: TSD 2020 - Twenty-third International Conference on Text, Speech and Dialogue, Sep 2020, Brno, Czech Republic; https://hal.archives-ouvertes.fr/hal-02862245 (2020)
BASE
3
Emoción, percepción, producción: un estudio psicolingüístico para detectar emociones en el habla [Emotion, perception, production: a psycholinguistic study for detecting emotions in speech]
Gibson, M. (Mark); González-Machorro, M. (Mónica). - 2020
BASE
4
"Grumpy" or "furious"? arousal of emotion labels influences judgments of facial expressions
Barker, Megan S.; Bidstrup, Emma M.; Robinson, Gail A. - Public Library of Science, 2020
BASE
5
Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances
BASE
6
Speech Perception: Phonological Neighborhood Effects on Word Recognition Persist Despite Semantic Sentence Context
González Álvarez, Julio; Cervera Crespo, Teresa. - SAGE Publications, 2019
BASE
7
Inferring Availability for Communication in Smart Homes Using Context
In: PerCom 2018 - IEEE International Conference on Pervasive Computing and Communications, Mar 2018, Athens, Greece, pp. 1-6; https://hal.archives-ouvertes.fr/hal-01762137 (2018)
BASE
8
Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals
In: International Conference on Multimodal Interaction (ICMI 2018), Oct 2018, Boulder, CO, United States, pp. 220-228; https://hal.archives-ouvertes.fr/hal-01994000; doi:10.1145/3242969.3243012 (2018)
BASE
9
A developmental perspective on processing semantic context: preliminary evidence from sentential auditory word repetition in school-aged children
Mahler, N. A.; Chenery, H. J. - Springer, 2018
BASE
10
Modelling Semantic Context of OOV Words in Large Vocabulary Continuous Speech Recognition
In: IEEE/ACM Transactions on Audio, Speech and Language Processing, 25 (3), pp. 598-610, 2017; ISSN: 2329-9290; EISSN: 2329-9304; https://hal.inria.fr/hal-01461617; doi:10.1109/TASLP.2017.2651361
BASE
11
Word Recognition in High and Low Skill Spellers: Context effects on Lexical Ambiguity Resolution
In: http://rave.ohiolink.edu/etdc/view?acc_num=kent1493035902158255 (2017)
BASE
12
Resources For Robotology/Natural-Speech ...
Badino, Leonardo; Higy, Bertrand. - Zenodo, 2017
BASE
13
Resources For Robotology/Natural-Speech ...
Badino, Leonardo; Higy, Bertrand. - Zenodo, 2017
BASE
14
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Xie, Z; Sun, Z; Jin, L. - 2017
BASE
15
Lightweight Spoken Utterance Classification with CFG, tf-idf and Dynamic Programming
In: Statistical Language and Speech Processing (SLSP), pp. 143-154 (2017); ISBN: 978-3-319-68455-0
BASE
16
Recurrent neural network language models for automatic speech recognition
Gangireddy, Siva Reddy. - The University of Edinburgh, 2017
Abstract: The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, enhancement of the context, and adaptation. We learned better parameters by a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs, in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre, or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches to adapt the RNNLMs. In the first approach the forward-propagating hidden activations are scaled, known as learning hidden unit contributions (LHUC). In the second approach we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context enhancement of RNNLMs using prosody and syntactic features. The prosody features were computed from the acoustics of the context words, and the syntactic features were derived from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
Keyword: adaptation; automatic speech recognition; CCG Supertags; context-enhancement; language modelling; LHUC; MGB Challenge; N-grams; NNLM; POS; pre-training; prosody features; RNNLM; switchboard; syntactic features; TED Talks
URL: http://hdl.handle.net/1842/28990
BASE
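The LHUC adaptation approach described in the abstract of hit 16 is compact enough to sketch. What follows is a minimal, illustrative PyTorch sketch, not code from the thesis: each hidden activation of an LSTM language model is rescaled by a learnable per-unit factor 2*sigmoid(a), and at adaptation time only those factors are updated on first-pass ASR transcripts while every other weight stays frozen. The names LHUCRNNLM and adapt_lhuc, and all dimensions, are hypothetical.

import torch
import torch.nn as nn

class LHUCRNNLM(nn.Module):
    # Generic LSTM language model with LHUC scaling on the hidden layer.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One LHUC parameter per hidden unit; zero init gives a scale of 1.
        self.lhuc = nn.Parameter(torch.zeros(hidden_dim))
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, seq)
        h, _ = self.rnn(self.embed(tokens))         # (batch, seq, hidden)
        h = h * (2.0 * torch.sigmoid(self.lhuc))    # LHUC: rescale activations
        return self.out(h)                          # next-word logits

def adapt_lhuc(model, batches, lr=0.1, steps=5):
    # Unsupervised adaptation on first-pass ASR transcripts: freeze all
    # weights except the LHUC scales, then minimise LM cross-entropy.
    for p in model.parameters():
        p.requires_grad_(False)
    model.lhuc.requires_grad_(True)
    opt = torch.optim.SGD([model.lhuc], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for tokens in batches:                      # LongTensor (batch, seq)
            logits = model(tokens[:, :-1])          # predict token t+1 from t
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

Dropping the freezing loop and optimising model.parameters() instead would correspond to the abstract's second approach, adapting all parameters of the RNNLM.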
17
Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author's Manuscript)
BASE
18
Insight through uncertainty: a review of the literature on the effects of cognitive processes and schema on responses to elicitation (‘projective’) techniques in evaluation and research interviews
BASE
19
Introduction to New Work on Immigration and Identity in Contemporary France, Québec, and Ireland
In: CLCWeb: Comparative Literature and Culture (2016)
BASE
20
Thematic Bibliography to New Work on Immigration and Identity in Contemporary France, Québec, and Ireland
In: CLCWeb: Comparative Literature and Culture (2016)
BASE


Hits by source type:
Catalogues: 112
Bibliographies: 288
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 129