
Search in the Catalogues and Directories

Hits 1 – 20 of 35

1
Eye-tracking as a research method in language testing
Brunfaut, Tineke. - : Springer, 2022
BASE
2
The Routledge handbook of second language acquisition and language testing
Winke, Paula (Ed.); Brunfaut, Tineke (Ed.). - London : Routledge, 2021
BLLDB
UB Frankfurt Linguistik
3
Testing young foreign language learners’ reading comprehension: Exploring the effects of working memory, grade level, and reading task
BASE
4
Text authenticity in listening assessment: Can item writers be trained to produce authentic-sounding texts?
BASE
5
The Routledge handbook of second language acquisition and language testing
Winke, Paula; Brunfaut, Tineke. - : Routledge, 2021
BASE
6
Validating assessments for research purposes
Revesz, Andrea; Brunfaut, Tineke. - : Routledge, 2021
BASE
7
Perspectives on "knowing" a second language: What are we seeking to measure?
Winke, Paula; Brunfaut, Tineke. - : Routledge, 2021
BASE
8
Raters: Behavior and training
Pill, John; Smart, Cameron. - : Routledge, 2021
BASE
9
Motivational factors in computer-administered integrated skills tasks: A study of young learners
BASE
10
Trajectories of language assessment literacy in a teacher-researcher partnership: Locating elements of praxis through narrative inquiry
Harding, Luke; Brunfaut, Tineke. - : Springer, 2020
BASE
11
International language proficiency standards in the local context: Interpreting the CEFR in standard setting for exam reform in Luxembourg
BASE
12
Towards social justice for item writers: Empowering item writers through language assessment literacy training
Abstract: Item writers play a key role in the language test cycle, as they essentially need to operationalise the construct into actual tasks. Often, however, these assessment professionals receive rather narrowly focused training in writing items to a particular set of specifications. Usually, this training is limited to ‘item writing guidelines’ or instruction in mechanical aspects of item writing. In this presentation, we argue that mechanical training is not sufficient for item writers to consistently produce high-quality items. Item writers need to be empowered through more comprehensive assessment literacy training so that they acquire a deeper understanding not only of ‘how’ but also of ‘why’ item specifications contain specific requirements and how their work contributes to test validity. Our viewpoint is based on a study in which we explored the effectiveness of a three-month online item writer course that included instruction in writing specific item types as well as in broader language assessment principles. In our talk, we will provide an overview of the course’s content and explain how we evaluated the training through a ‘pretest-posttest’ design. More specifically, the 25 novice item writers participating in the study completed three item writing tasks prior to the course as well as after having finished it. The quality of the items they produced was evaluated by expert item reviewers. In addition, pre- and post-course stimulated recall interviews were conducted with the item writers, course feedback was collected at various points during the training, and online group discussions were analysed. In our presentation, we will primarily focus on the interviews, which aimed to gain information on the approach, procedures, and techniques the trainees used while writing items, the difficulties they encountered, and their post-course reflections on the training’s effect on their item writing skills. Analysis of the interview data revealed that improvements in item quality from pre- to post-course were associated with an increased post-course awareness among the item writers of fundamental assessment principles such as authenticity and fairness, a deeper understanding of the test construct, and conscious attempts to improve the validity of items by avoiding bias, construct underrepresentation, and construct-irrelevant variance. The findings suggest that bringing item writers to stage 4 of Pill & Harding’s (2013) language assessment literacy continuum, i.e. procedural and conceptual literacy, empowers item writers in their work by enabling them to develop higher-quality items, and by making them more interested and more confident in writing language test items. Our study has practical implications for item writer training and, more fundamentally, for the empowerment of item writers and the development of good-quality tests.
URL: https://eprints.lancs.ac.uk/id/eprint/148299/
BASE
13
Choosing test formats and task types
BASE
14
SLA researcher assessment literacy
Harding, Luke; Kremmel, Benjamin. - : Routledge, 2020
BASE
15
Is anybody listening?: The nature of second language listening in integrated listening-to-summarize tasks
BASE
16
Language testing in the ‘hostile environment’: The discursive construction of ‘secure English language testing’ in the United Kingdom
BASE
17
The role of working memory in young second language learners’ written performances
BASE
18
Going online: The effect of mode of delivery on performances and perceptions on an English L2 writing test suite
BASE
19
Integrative testing of listening
Brunfaut, Tineke; Rukthong, Anchana. - : John Wiley & Sons, Inc., 2018
BASE
20
Exploring the role of phraseological knowledge in foreign language reading
BASE


Hits by source category: Catalogues 6 · Bibliographies 3 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 29