How valid are domain experts' judgements of workplace communication? Implications for setting standards on the Occupational English Test (OET) Writing sub-test
Abstract: © 2018 Dr. Simon Davidson. As part of the prerequisites for obtaining professional registration and practising in Australia, International Medical Graduates (IMGs) must satisfy requirements that include satisfactory clinical knowledge and English language proficiency. However, concerns have been raised that the minimum standards on the tests used to assess language competency – including the Occupational English Test (OET), a language for specific purposes (LSP) test for health professionals – might be inadequate to ensure workplace readiness. To assess the validity of these concerns, this study examined the minimum standards on the OET Writing sub-test through the process of ‘standard setting’ – a procedure for drawing insights from appropriate stakeholders (in this case, doctors with experience of workplace communication demands) about the levels of proficiency viewed as satisfactory for a particular purpose. The study sought to determine the minimum levels of competence deemed appropriate for effective performance in the workplace, and to understand the basis for the decisions made and how closely these corresponded to the construct of communication the OET is designed to measure. Previous studies (Manias & McNamara, 2016; Pill & McNamara, 2016) explored these issues in relation to the OET Speaking sub-test, whereas the current study focuses on writing – a thus far neglected area. The writing task on the OET is a letter of referral based on a set of provided case notes. Eighteen health professionals (all with experience as medical educators, GPs or specialists) were recruited to participate in standard-setting workshops designed to elicit decisions about what level of performance on this task deserved a passing grade and why. To gain further insight into the basis for the standards set, verbal reports in the form of a think-aloud protocol (TAP) were employed.
The doctors’ comments from the workshops and verbal reports were thematically coded, and intercoder reliability checks were conducted. Before new passing standards and ‘cut scores’ were calculated, a FACETS analysis (Linacre, 2017) was carried out to account for variation in the domain experts’ ratings, such as undue severity, leniency or inconsistency. The final quantitative analysis yielded a somewhat more stringent passing standard than the current one, mirroring the results of previous studies using the Analytic Judgement Method (AJM) (e.g., Knoch et al., 2017; Pill & McNamara, 2016). The new standards were compared with the current OET cut scores and indicated a higher ‘fail’ rate for the current data set. The stricter passing standard established by the domain experts in this study could be construed as supporting perceptions that the present benchmark is not set high enough and that some IMGs with unsatisfactory communication skills are still entering Australian work environments. The qualitative analysis further investigated whether domain experts are able to assess language proficiency separately from other professional skills (as stipulated by Australian federal government requirements). Some participants’ judgements (a minority overall) were influenced by views of candidates’ clinical competency, extending beyond the construct of communicative competence as defined by the OET. On the whole, however, the qualitative findings suggested that the subject-matter experts were indeed attending to textual features related to what the OET is intended to measure. The central question of whether domain experts without linguistic training are well placed to set the standards on an LSP test such as the OET was also considered.
Even though some participants’ judgements deviated somewhat from the current OET Writing sub-test criteria, the validity evidence collected in this study indicated, on the whole, that the new standards derived from domain expert participation were justified. The consequences of the study’s results for the validity of the OET Writing sub-test, and for LSP testing more generally, were reflected on using an argument-based validity framework proposed by Knoch and Macqueen (in preparation). The practical and operational implications of the study’s findings for the OET were also considered.
Open Access
Keywords: domain expert; health communication; language for specific purposes (LSP) test; standard setting; validity
URL: http://hdl.handle.net/11343/213877
BASE