
Search in the Catalogues and Directories

Hits 1 – 20 of 25

1
Validation of a large-scale task-based test: functional progression in dialogic speaking performance. In: Task-based language teaching and assessment: Contemporary reflections from across the world
Inoue, Chihiro; Nakatsuhara, Fumiyo. - : Springer Nature, 2022
2
The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations
Khabbazbashi, Nahal; Nakatsuhara, Fumiyo; Inoue, Chihiro. - : Cranmore Publishing on behalf of the International TESOL Union, 2022
3
Towards new avenues for the IELTS Speaking Test: insights from examiners’ voices
4
Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
5
The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test
Inoue, Chihiro; Lam, Daniel M. K.. - : Wiley, 2021
6
Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests
Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro. - : British Council, 2021
7
Task parallelness: investigating the difficulty of two spoken narrative tasks
Inoue, Chihiro. - 2020
8
Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances
Nakatsuhara, Fumiyo; Inoue, Chihiro; Taylor, Lynda. - : Taylor & Francis, 2020
9
Investigating the use of language functions for validating speaking test specifications
Inoue, Chihiro. - 2020
10
Exploring the use of video-conferencing technology to deliver the IELTS Speaking Test: Phase 3 technical trial
Berry, Vivien; Nakatsuhara, Fumiyo; Inoue, Chihiro. - : IELTS Partners: British Council, Cambridge Assessment English and IDP: IELTS Australia, 2020
11
The IELTS Speaking Test: what can we learn from examiner voices?
12
Academic speaking: does the construct exist, and if so, how do we test it?
13
Testing speaking skills: why and how?
14
Measuring L2 speaking
15
Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien. - : Taylor & Francis, 2017
16
Developing rubrics to assess the reading-into-writing skills: a case study
17
Exploring performance across two delivery modes for the same L2 speaking test: face-to-face and video-conferencing delivery: a preliminary comparison of test-taker and examiner behaviour
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien. - : The IELTS Partners: British Council, Cambridge English Language Assessment and IDP: IELTS Australia, 2017
18
Exploring performance across two delivery modes for the IELTS Speaking Test: face-to-face and video-conferencing delivery (Phase 2)
Nakatsuhara, Fumiyo; Berry, Vivien; Inoue, Chihiro. - : IELTS Partners, 2017
19
Accuracy across proficiency levels: A learner corpus approach. Jennifer Thewissen. Presses Universitaires de Louvain, Louvain-la-Neuve, Belgium (2015). 342 pp.
Inoue, Chihiro. - : Elsevier, 2017
20
A comparative study of the variables used to measure syntactic complexity and accuracy in task-based research
Inoue, Chihiro. - : Taylor & Francis (Routledge), 2017
Abstract: The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how results might be different depending on which measurements are used, and accordingly, choose the most appropriate variables for their research aims. The first strand of this article examines the variables conventionally used to measure syntactic complexity in order to identify which may be the best indicators of different proficiency levels, following suggestions by Norris and Ortega. The second strand compares the three variables used to measure accuracy in order to identify which one is most valid. The data analysed were spoken performances by 64 Japanese EFL students on two picture-based narrative tasks, which were rated at Common European Framework of Reference for Languages (CEFR) A2 to B2 according to Rasch-adjusted ratings by seven human judges. The tasks performed were very similar, but had different degrees of what Loschky and Bley-Vroman term ‘task-essentialness’ for subordinate clauses. It was found that the variables used to measure syntactic complexity yielded results that were not consistent with suggestions by Norris and Ortega. The variable found to be the most valid for measuring accuracy was errors per 100 words. Analysis of transcripts revealed that results were strongly influenced by the differing degrees of task-essentialness for subordination between the two tasks, as well as the spread of errors across different units of analysis. This implies that the characteristics of test tasks need to be carefully scrutinised, followed by careful piloting, in order to ensure greater validity and reliability in task-based research.
Keyword: accuracy; speaking; speech communication; syntactic complexity; task-based research
URL: http://hdl.handle.net/10547/621953
https://doi.org/10.1080/09571736.2015.1130079
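
The accuracy measure this abstract identifies as most valid, errors per 100 words, is straightforward to compute once errors have been annotated. The short Python sketch below is illustrative only and is not taken from the study; the function name, the assumption of a pre-annotated error count, and the sample figures are all hypothetical.

def errors_per_100_words(transcript: str, error_count: int) -> float:
    """Normalise an annotated error count to a 100-word baseline."""
    word_count = len(transcript.split())
    if word_count == 0:
        raise ValueError("transcript contains no words")
    return error_count / word_count * 100

# Hypothetical example: 7 annotated errors in a 140-word spoken narrative
sample_transcript = " ".join(["word"] * 140)
print(errors_per_100_words(sample_transcript, 7))  # -> 5.0 errors per 100 words

Normalising by words rather than by clauses or T-units keeps the result independent of how the transcript is segmented, which is consistent with the abstract's observation that results were influenced by the spread of errors across different units of analysis.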
