
Search in the Catalogues and Directories

Hits 1 – 20 of 32

1
Next-gen sequencing identifies non-coding variation disrupting miRNA-binding sites in neurological disorders
BASE
2
Natural SQL: Making SQL Easier to Infer from Natural Language Specifications
Gan, Y.; Chen, X.; Xie, J. - 2021
BASE
3
Towards Robustness of Text-to-SQL Models against Synonym Substitution
Gan, Y.; Chen, X.; Huang, Q. - 2021
BASE
4
Correcting Knowledge Base Assertions
Chen, J.; Chen, X.; Horrocks, I. - : Association for Computing Machinery, 2020
BASE
5
Study of central exclusive [Image: see text] production in proton-proton collisions at [Formula: see text] and 13 TeV
In: Eur Phys J C Part Fields (2020)
BASE
6
The relationship between English proficiency and humour appreciation among English L1 users and Chinese L2 users of English
Chen, X.; Dewaele, Jean-Marc. - : De Gruyter, 2019
BASE
7
The flowering of positive psychology in Foreign Language Teaching and Acquisition research
Dewaele, Jean-Marc; Chen, X.; Padilla, A. - : Frontiers Media, 2019
BASE
8
Exploiting future word contexts in neural network language models for speech recognition
Chen, X.; Liu, X.; Wang, Y. - : Institute of Electrical and Electronics Engineers (IEEE), 2019
BASE
9
Survival percentages of atraumatic restorative treatment (ART) restorations and sealants in posterior teeth: an updated systematic review and meta-analysis [Journal]
Amorim, R. G. de; Frencken, J. E.; Raggio, D. P.
DNB
10
Disparities in Diabetes Care Quality by English Language Preference in Community Health Centers
Leung, L. B.; Vargas-Bustamante, A.; Martinez, A. E.; Chen, X.; Rodriguez, H. P. - In: Health Services Research, 53(1), 509-531 (2018). doi:10.1111/1475-6773.12590. Retrieved from: http://www.escholarship.org/uc/item/40x4d7fn
BASE
11
Phonetic and graphemic systems for multi-genre broadcast transcription
Wang, Y.; Chen, X.; Gales, M.J.F. - : IEEE, 2018
BASE
12
Active memory networks for language modeling
Chen, O.; Ragni, A.; Gales, M. - : International Speech Communication Association (ISCA), 2018
BASE
13
Future word contexts in neural network language models
Chen, X.; Liu, X.; Ragni, A.. - : IEEE, 2018
BASE
14
Phonetic and graphemic systems for multi-genre broadcast transcription
Wang, Yu; Chen, X; Gales, Mark. - : Apollo - University of Cambridge Repository, 2018
BASE
15
Phonetic and graphemic systems for multi-genre broadcast transcription
Wang, Yu; Chen, X.; Gales, Mark. - : IEEE, 2018. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2018. https://ieeexplore.ieee.org/document/8462353
BASE
16
Future word contexts in neural network language models
Chen, X.; Liu, X.; Ragni, A.. - : IEEE, 2017
BASE
17
Investigating bidirectional recurrent neural network language models for speech recognition
Chen, X.; Ragni, A.; Liu, X.; Gales, M.J.F. - : International Speech Communication Association (ISCA), 2017
Abstract: Recurrent neural network language models (RNNLMs) are powerful language modeling techniques. Significant performance improvements over n-gram language models have been reported in a range of tasks, including speech recognition. Conventional n-gram and neural network language models are trained to predict the probability of the next word given its preceding context history. In contrast, bidirectional recurrent neural network based language models consider the context from future words as well. This complicates the inference process, but has theoretical benefits for tasks such as speech recognition, as additional context information can be used. However, to date, very limited or no gains in speech recognition performance have been reported with this form of model. This paper examines the issues of training bidirectional recurrent neural network language models (bi-RNNLMs) for speech recognition. A bi-RNNLM probability smoothing technique is proposed that addresses the very sharp posteriors often observed in these models. The performance of the bi-RNNLMs is evaluated on three speech recognition tasks: broadcast news; meeting transcription (AMI); and low-resource systems (Babel data). On all tasks, gains are observed by applying the smoothing technique to the bi-RNNLM. In addition, consistent performance gains can be obtained by combining bi-RNNLMs with n-gram and unidirectional RNNLMs.
URL: https://www.isca-speech.org/archive/Interspeech_2017/abstracts/0513.html
http://eprints.whiterose.ac.uk/152811/8/Chen%20et%20al%202017%20Investigating%20bidirectional%20recurrent%20neural%20network%20ISCA.PDF
http://eprints.whiterose.ac.uk/152811/
BASE
18
Search for dark matter produced in association with heavy-flavor quark pairs in proton-proton collisions at [Formula: see text]
Sirunyan, A. M.; Tumasyan, A.; Adam, W. - : Springer Berlin Heidelberg, 2017
BASE
19
Developing Universal Dependencies for Mandarin Chinese
In: The 12th Workshop on Asian Language Resources, Osaka, Japan, 2016. https://halshs.archives-ouvertes.fr/halshs-01509329
BASE
20
Multi-language neural network language models
Ragni, A.; Dakin, E.; Chen, X.. - : International Speech Communication Association (ISCA), 2016
BASE
