Search in the Catalogues and Directories

Hits 1 – 17 of 17

1
On topic validity in speaking tests
Khabbazbashi, Nahal. - : Cambridge University Press, 2022
2
The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations
Khabbazbashi, Nahal; Nakatsuhara, Fumiyo; Inoue, Chihiro. - : Cranmore Publishing on behalf of the International TESOL Union, 2022
3
Use of innovative technology in oral language assessment
Nakatsuhara, Fumiyo; Berry, Vivien. - : Taylor & Francis, 2021
4
Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
5
Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests
Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro. - : British Council, 2021
6
Cognitive validity in the testing of speaking
Field, John. - 2020
7
Re-engineering a speaking test used for university admissions purposes: considerations and constraints: the case of IELTS
Taylor, Lynda. - 2020
8
Analysing multi-person discourse in group speaking tests: how do test-taker characteristics, task types and group sizes affect co-constructed discourse in groups?
9
Investigating the use of language functions for validating speaking test specifications
Inoue, Chihiro. - 2020
10
The IELTS Speaking Test: what can we learn from examiner voices?
11
Academic speaking: does the construct exist, and if so, how do we test it?
12
Testing speaking skills: why and how?
13
Applying the socio-cognitive framework: gathering validity evidence during the development of a speaking test. In: Lessons and Legacy: A Tribute to Professor Cyril J Weir (1950–2018)
Nakatsuhara, Fumiyo; Dunlea, Jamie. - : UCLES/Cambridge University Press, 2020
14
Validating speaking test rating scales through microanalysis of fluency using PRAAT
15
Towards a model of multi-dimensional performance of C1 level speakers assessed in the Aptis Speaking Test
Tavakoli, Parvaneh; Awwad, Anas; Nakatsuhara, Fumiyo. - : British Council, 2019
16
Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien. - : Taylor & Francis, 2017
17
A comparative study of the variables used to measure syntactic complexity and accuracy in task-based research
Inoue, Chihiro. - : Taylor & Francis (Routledge): SSH Titles, 2017
Abstract: The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how results might be different depending on which measurements are used, and accordingly, choose the most appropriate variables for their research aims. The first strand of this article examines the variables conventionally used to measure syntactic complexity in order to identify which may be the best indicators of different proficiency levels, following suggestions by Norris and Ortega. The second strand compares the three variables used to measure accuracy in order to identify which one is most valid. The data analysed were spoken performances by 64 Japanese EFL students on two picture-based narrative tasks, which were rated at Common European Framework of Reference for Languages (CEFR) A2 to B2 according to Rasch-adjusted ratings by seven human judges. The tasks performed were very similar, but had different degrees of what Loschky and Bley-Vroman term ‘task-essentialness’ for subordinate clauses. It was found that the variables used to measure syntactic complexity yielded results that were not consistent with suggestions by Norris and Ortega. The variable found to be the most valid for measuring accuracy was errors per 100 words. Analysis of transcripts revealed that results were strongly influenced by the differing degrees of task-essentialness for subordination between the two tasks, as well as the spread of errors across different units of analysis. This implies that the characteristics of test tasks need to be carefully scrutinised, followed by careful piloting, in order to ensure greater validity and reliability in task-based research.
Keyword: accuracy; speaking; speech communication; syntactic complexity; task-based research
URL: http://hdl.handle.net/10547/621953
https://doi.org/10.1080/09571736.2015.1130079

© 2013 - 2024 Lin|gu|is|tik