1 | Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances
BASE
2 | Interactional competence in the workplace: challenges and opportunities
3 | Framing academic literacy: considerations and implications for language assessment
4 | Automated approaches to establishing context validity in reading tests
5 | Comparing writing proficiency assessments used in professional medical registration: a methodology to inform policy and practice
6 | Assessing reading-into-writing skills for an academic context: some theoretical and practical considerations
7 | Re-engineering a speaking test used for university admissions purposes: considerations and constraints: the case of IELTS
8 | General Language Proficiency (GLP): reflections on the "issues revisited" from the perspective of a UK examination board
10 | Recommending a nursing-specific passing standard for the IELTS examination
11 | Academic speaking: does the construct exist, and if so, how do we test it?
12 | The role of academic institutions in the development of language testing and assessment (LTA)
14 | Research and practice in assessing academic English: the case of IELTS
15 | Interactional competence: conceptualisations, operationalisations, and outstanding questions
16 | The role of the L1 in testing L2 English; Ontologies of English. Conceptualising the language for learning, teaching, and assessment
17 | Developing rubrics to assess the reading-into-writing skills: a case study

Abstract: The integrated assessment of language skills, particularly reading-into-writing, is experiencing a renaissance. The use of rating rubrics with verbal descriptors that describe the quality of L2 writing performance is well established in large-scale assessment. However, less attention has been directed towards the development of reading-into-writing rubrics. Identifying and evaluating the contribution of reading ability to the writing process and product, so that it can be reflected in a set of rating criteria, is not straightforward. This paper reports on a recent project to define the construct of reading-into-writing ability for designing a suite of integrated tasks at four proficiency levels, ranging from CEFR A2 to C1. The authors discuss how theoretical construct definition, together with empirical analyses of test-taker performance, underpinned the development of rating rubrics for the reading-into-writing tests. Methodologies used in the project included questionnaires, expert panel judgement, group interviews, automated textual analysis, and analysis of rater reliability. Based on the results of three pilot studies, the effectiveness of the rating rubrics is discussed. The findings can inform decisions about how best to account for both the reading and writing dimensions of test-taker performance in the rubric descriptors.

Keywords: CEFR; integrated tasks; L2 writing; language assessment; Q110 Applied Linguistics; reading-into-writing; scoring; writing; writing assessment

URL: http://hdl.handle.net/10547/621934
DOI: https://doi.org/10.1016/j.asw.2015.07.004
18 | Reviewing the suitability of English language tests for providing the GMC with evidence of doctors' English proficiency