
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
Use of innovative technology in oral language assessment
Nakatsuhara, Fumiyo; Berry, Vivien. - : Taylor & Francis, 2021
BASE
2
Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
BASE
3
Exploring the use of video-conferencing technology to deliver the IELTS Speaking Test: Phase 3 technical trial
Berry, Vivien; Nakatsuhara, Fumiyo; Inoue, Chihiro. - : IELTS Partners: British Council, Cambridge Assessment English and IDP: IELTS Australia, 2020
BASE
4
Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien. - : Taylor & Francis, 2017
BASE
5
Exploring performance across two delivery modes for the same L2 speaking test: face-to-face and video-conferencing delivery: a preliminary comparison of test-taker and examiner behaviour
Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien. - : The IELTS Partners: British Council, Cambridge English Language Assessment and IDP: IELTS Australia, 2017
BASE
6
Exploring performance across two delivery modes for the IELTS Speaking Test: face-to-face and video-conferencing delivery (Phase 2)
Abstract: Face-to-face speaking assessment is widespread as a form of assessment, since it allows the elicitation of interactional skills. However, face-to-face speaking test administration is also logistically complex, resource-intensive, and can be difficult to conduct in geographically remote or politically sensitive areas. Recent advances in video-conferencing technology now make it possible to engage in online face-to-face interaction more successfully than was previously the case, thus reducing dependency upon physical proximity. A major study was therefore commissioned to investigate how new technologies could be harnessed to deliver the face-to-face version of the IELTS Speaking test. Phase 1 of the study, carried out in London in January 2014, presented the results and recommendations of a small-scale initial investigation designed to explore what similarities and differences in scores, linguistic output, and test-taker and examiner behaviour could be discerned between face-to-face and internet-based video-conferencing delivery of the Speaking test (Nakatsuhara, Inoue, Berry and Galaczi, 2016). The results of the analyses suggested that the speaking construct remains essentially the same across both delivery modes. This report presents results from Phase 2 of the study, a larger-scale follow-up investigation designed to: (i) analyse test scores using more sophisticated statistical methods than was possible in the Phase 1 study; (ii) investigate the effectiveness of the training for the video-conferencing-delivered test, which was developed based on findings from the Phase 1 study; (iii) gain insights into the issue of sound quality perception and its (perceived) effect; (iv) gain further insights into test-taker and examiner behaviours across the two delivery modes; and (v) confirm the results of the Phase 1 study. Phase 2 of the study was carried out in Shanghai, People's Republic of China, in May 2015.
Ninety-nine (99) test-takers each took two speaking tests, one under face-to-face and one under internet-based video-conferencing conditions. Performances were rated by 10 trained IELTS examiners. A convergent parallel mixed-methods design was used to allow for the collection of an in-depth, comprehensive set of findings derived from multiple sources. The research included an analysis of rating scores under the two delivery conditions and of test-takers' linguistic output during the tests, as well as short interviews with test-takers following a questionnaire format. Examiners responded to two feedback questionnaires and participated in focus group discussions relating to their behaviour as interlocutors and raters, and to the effectiveness of the examiner training. Trained observers also took field notes during the test sessions and conducted interviews with the test-takers. Many-Facet Rasch Model (MFRM) analysis of test scores indicated that, although the video-conferencing mode was slightly more difficult than the face-to-face mode, when the results of all analytic scoring categories were combined, the actual score difference was negligibly small, thus supporting the Phase 1 findings. Examination of language functions elicited from test-takers revealed that, in Part 1 of the test, significantly more test-takers asked questions to clarify what the examiner had said in the video-conferencing mode (63.3%) than in the face-to-face mode (26.7%). Sound quality was generally positively perceived in this study, being reported as 'Clear' or 'Very clear', although the examiners and observers tended to perceive it more positively than the test-takers. There did not seem to be any relationship between sound quality perceptions and the proficiency level of test-takers. While 71.7% of test-takers preferred the face-to-face mode, slightly more test-takers reported that they were more nervous in the face-to-face mode (38.4%) than in the video-conferencing mode (34.3%).
All examiners found the training useful and effective, with the majority (80%) reporting that the two modes gave test-takers an equal opportunity to demonstrate their level of English proficiency. They also reported that it was equally easy for them to rate test-taker performance in the face-to-face and video-conferencing modes. The report concludes with a list of recommendations for further research, including suggestions for further examiner and test-taker training, the resolution of technical issues regarding video-conferencing delivery, and issues related to rating, before any decisions about deploying a video-conferencing mode of delivery for the IELTS Speaking test are made. ; Funded by the IELTS Partners: British Council, Cambridge English Language Assessment and IDP: IELTS Australia
Keywords: language assessment; language testing; mixed-methods research; Q330 English as a second language; second language; speaking
URL: http://hdl.handle.net/10547/622263
BASE
7
Mind the gap – bringing teachers into the language literacy debate
BASE
8
Exploring teachers’ language assessment literacy: a social constructivist approach to understanding effective practice
BASE
9
What do teachers really want to know about assessment?
BASE
10
Singing from the same hymn sheet? What language assessment literacy means to teachers
BASE
11
Personality differences and oral test performance
Berry, Vivien [author]. - 2007
DNB Subject Category Language
12
Personality differences and oral test performance
Berry, Vivien. - Frankfurt am Main [u.a.] : Lang, 2007
UB Frankfurt Linguistik
13
Raising English language standards in Hong Kong
In: Language policy. - New York, NY : Springer 4 (2005) 4, 371-394
BLLDB
14
Reading-Writing Connections in E.A.P. Classes: A Content Analysis of Written Summaries Produced under Three Mediating Conditions
In: Regional Language Centre <Singapore>. RELC journal. - London : Sage 26 (1995) 2, 25-43
OLC Linguistik
