
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
On topic validity in speaking tests
Khabbazbashi, Nahal. Cambridge University Press, 2022
2
Towards the new construct of academic English in the digital age
Khabbazbashi, Nahal; Chan, Sathena Hiu Chong; Clark, Tony. Oxford University Press, 2022
3
Validation of a large-scale task-based test: functional progression in dialogic speaking performance. In: Task-based language teaching and assessment: Contemporary reflections from across the world
Inoue, Chihiro; Nakatsuhara, Fumiyo. Springer Nature, 2022
4
Eye-tracking L2 students taking online multiple-choice reading tests: benefits and challenges
Latimer, Nicola; Chan, Sathena Hiu Chong. Cranmore Publishing, 2022
5
The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations
Khabbazbashi, Nahal; Nakatsuhara, Fumiyo; Inoue, Chihiro. Cranmore Publishing on behalf of the International TESOL Union, 2022
6
Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
7
The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test
Inoue, Chihiro; Lam, Daniel M. K. Wiley, 2021
8
Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests
Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro. British Council, 2021
9
Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances
Nakatsuhara, Fumiyo; Inoue, Chihiro; Taylor, Lynda. Taylor & Francis, 2020
10
Opening the black box: exploring automated speaking evaluation. In: Issues in Language Testing Around the World: Insights for Language Test Users
Abstract: The rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech to scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment touching on key theoretical discussions on construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.
Self-archiving: AAM permitted with 24-month embargo (https://www.springer.com/gp/open-access/publication-policies/self-archiving-policy)
Keywords: language assessment; learning technology; speaking. Subject category: X162 Teaching English as a Foreign Language (TEFL)
URL: https://doi.org/10.1007/978-981-33-4232-3
http://hdl.handle.net/10547/624618
11
Comparing writing proficiency assessments used in professional medical registration: a methodology to inform policy and practice
Chan, Sathena Hiu Chong; Taylor, Lynda. Elsevier, 2020
12
Applying the socio-cognitive framework: gathering validity evidence during the development of a speaking test. In: Lessons and Legacy: A Tribute to Professor Cyril J Weir (1950–2018)
Nakatsuhara, Fumiyo; Dunlea, Jamie. UCLES/Cambridge University Press, 2020
13
Research and practice in assessing academic reading: the case of IELTS
Weir, Cyril J.; Chan, Sathena Hiu Chong. Cambridge University Press, 2020
14
The impact of input task characteristics on performance on an integrated listening-into-writing EAP assessment
Westbrook, Carolyn. University of Bedfordshire, 2019

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 14