
Search in the Catalogues and Directories

Hits 1 – 20 of 408

1
Psychometric Properties of the Spanish Version of the Highly Sensitive Child Scale: The Parent Version
In: International Journal of Environmental Research and Public Health; Volume 19; Issue 5; Pages: 3101 (2022)
BASE
2
Developing a Technology-Based Classroom Assessment of Academic Reading Skills for English Language Learners and Teachers: Validity Evidence for Formative Use
In: Languages; Volume 7; Issue 2; Pages: 71 (2022)
BASE
3
Overall, a Good Test, but…—Swedish Lower Secondary Teachers’ Perceptions and Use of National Test Results of English
In: Languages; Volume 7; Issue 1; Pages: 64 (2022)
BASE
4
The Psychometric Properties and Cutoff Score of the Child and Adolescent Mindfulness Measure (CAMM) in Chinese Primary School Students
In: Children; Volume 9; Issue 4; Pages: 499 (2022)
BASE
5
Evaluating Perceptions towards the Consequential Validity of Integrated Language Proficiency Assessment
In: Languages; Volume 7; Issue 1; Pages: 65 (2022)
BASE
6
Reliability and Validity of the Malaysian English Version of the Diagnostic Criteria for Temporomandibular Disorder (M-English DC/TMD)
In: Healthcare; Volume 10; Issue 2; Pages: 329 (2022)
BASE
7
Matching EFL learners with appropriate levels of reading materials: Backing for using Extensive Reading Placement/Progress Test [Japanese title: Extensive reading materials suited to EFL learners' proficiency level: a validity study of the EPER test]
Yoshizawa, Kiyomi. - : Kansai University Faculty of Foreign Language Studies, 2022
BASE
8
Eye-tracking L2 students taking online multiple-choice reading tests: benefits and challenges
Latimer, Nicola; Chan, Sathena Hiu Chong. - : Cranmore Publishing, 2022
BASE
9
Psychometric Properties of the Spanish Version of the Highly Sensitive Child Scale: The Parent Version
BASE
10
Discovering Early de Finetti’s Writings on Trivalent Theory of Conditionals
In: Argumenta, Journal of analytic philosophy ; https://hal.archives-ouvertes.fr/hal-03363111 ; Argumenta, Journal of analytic philosophy, Department of Humanities and Social Sciences, University of Sassari, 2021, 6, pp.267-291. ⟨10.14275/2465-2334/202112.bar⟩ (2021)
BASE
11
Main Features of a Linguistic Test for English Studies in Universities [original title in Russian]
Shvetsova, Elena Vyacheslavovna. - : Obrazovanie. Nauka. Nauchnye kadry, 2021
BASE
12
A Method for Diagnosing the Dialectical Mental Action of Alternative Change [original title in Russian]
Bayanova, Larisa Faritovna; Khamatvaleeva, Dinara Gumarovna; Shevkunova, Anastasiya Evgenyevna. - : Sovremennoe doshkolnoe obrazovanie. Teoriya i praktika, 2021
BASE
13
Discovering early de Finetti’s writings on trivalent theory of conditionals ...
Baratgin, Jean. - : Università degli studi di Sassari, 2021
BASE
14
Lost in translation: Qualitative data collecting and translating challenges in multilingual settings in information systems research
BASE
15
Cross-cultural cognitive assessment of dementia: a meta-analysis of the impact of illiteracy on dementia screening and an evaluation of a transcultural short-term memory assessment ...
Maher, Caragh. - : The University of Edinburgh, 2021
BASE
16
VALIDITY OF STUDENT WORKSHEETS WITH THE THEME OF ENERGY IN DAILY LIFE BY PROBLEM BASED LEARNING OF INTEGRATED IN 21ST CENTURY LEARNING ...
Gusti, Dian Arima; Ratnawulan. - : Zenodo, 2021
BASE
17
VALIDITY OF STUDENT WORKSHEETS WITH THE THEME OF ENERGY IN DAILY LIFE BY PROBLEM BASED LEARNING OF INTEGRATED IN 21ST CENTURY LEARNING ...
Gusti, Dian Arima; Ratnawulan. - : Zenodo, 2021
BASE
18
Language neutrality of the LLAMA test explored: The case of agglutinative languages and multiple writing systems
In: Journal of the European Second Language Association; Vol 5, No 1 (2021); 87–100 ; 2399-9101 (2021)
BASE
19
Investigating reliability and construct validity of a source-based academic writing test for placement purposes
In: Graduate Theses and Dissertations (2021)
Abstract: Source-based writing, in which writers read or listen to academic content before writing, has been considered to better assess academic writing skills than independent writing tasks (Read, 1990; Weigle, 2004). Because scores resulting from ratings of test takers' source-based writing task responses are treated as indicators of their academic writing ability, researchers have begun to investigate the meaning of scores on source-based academic writing tests in an attempt to define the construct measured on such tests. Although this research has resulted in insights about source-based writing constructs and the rating reliability of such tests, it has been limited in its research perspective, the methods for collecting data about the rating process, and the clarity of the connection between reliability and construct validity.

This study aimed to collect and analyze evidence regarding the reliability and construct validity of a source-based academic English test for placement purposes, called the EPT Writing, and to show the relationship between these two parts of the study by presenting the evidence in a validity argument (Kane, 1992, 2006, 2013). Specifically, important reliability aspects were examined, including the appropriateness of the rating rubric based on raters' opinions and statistical evidence; the performance of the raters in terms of severity, consistency, and bias; and test score reliability. The construct of academic source-based writing assessed by the EPT Writing was also explored through analysis of the writing features that raters attended to while rating test takers' responses.

The study employed a mixed-methods multiphase research design (Creswell & Plano Clark, 2012) in which both quantitative and qualitative data were collected and analyzed in two sequential phases to address the research questions. In Phase 1, quantitative data, consisting of 1,300 operational ratings provided by the EPT Office, were analyzed using Many-Facets Rasch Measurement (MFRM) and Generalizability theory to address the research questions related to the rubric's functionality, raters' performance, and score reliability. In Phase 2, 630 experimental ratings, 90 stimulated recalls collected with the assistance of eye-tracking records, and nine interviews with nine raters were analyzed to address the research questions pertaining to raters' opinions of the rubric and the writing features that attracted raters' attention during rating. The findings were presented in a validity argument to show the connection between the reliability of the ratings and the construct validity, which needs to be taken into account in research on rating processes.

Overall, the raters' interviews and the MFRM analysis of the operational ratings showed that the rubric was mostly appropriate for providing evidence of variation in source-based academic writing ability. Regarding raters' performance, the MFRM analysis revealed that while most raters maintained comparability and consistency in severity as well as impartiality towards the writing tasks, some were significantly more generous, inconsistent, or biased against task types. The score reliability estimate for a 2-task x 2-rater design was found to be below the desired level, suggesting that more tasks and raters are needed to increase reliability (a brief illustrative sketch of this extrapolation follows this entry). Additionally, analysis of the verbal reports indicated that the raters attended to writing features aligned with the source-based academic writing construct that the test aims to measure.

The conclusion presents a partial validity framework for the EPT Writing, in addition to implications for construct definition of source-based academic writing tests, cognition research methods, and language assessment validation research. Recommendations for the EPT Writing include a clearer definition of the test construct, revision of the rubric, and more rigorous rater training. Suggested directions for future research include further investigation of raters' cognition in source-based writing assessment and additional validation studies for other inferences of the validity framework for the EPT Writing.
Keyword: construct validity; eye tracking; mixed methods; reliability; source-based writing; validity argument
URL: https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=9580&context=etd
https://lib.dr.iastate.edu/etd/18573
BASE
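Note on the reliability finding in the abstract above: the claim that adding tasks and raters would raise score reliability is the standard Generalizability-theory decision-study extrapolation. The Python sketch below illustrates that logic for a fully crossed persons x tasks x raters design; the function name and the variance components are hypothetical placeholders for illustration only and are not taken from the EPT Writing study.

    # Minimal D-study sketch (assumed, illustrative values only).
    # Relative generalizability coefficient for a crossed p x t x r design:
    # E-rho^2 = var_p / (var_p + var_pt/n_t + var_pr/n_r + var_ptr_e/(n_t * n_r))

    def g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters):
        """Return the relative generalizability (reliability-like) coefficient."""
        relative_error = (var_pt / n_tasks
                          + var_pr / n_raters
                          + var_ptr_e / (n_tasks * n_raters))
        return var_p / (var_p + relative_error)

    # Hypothetical variance components: person, person-by-task,
    # person-by-rater, and residual (person-by-task-by-rater plus error).
    components = dict(var_p=0.50, var_pt=0.20, var_pr=0.05, var_ptr_e=0.30)

    for n_tasks, n_raters in [(2, 2), (3, 2), (4, 3)]:
        rho = g_coefficient(**components, n_tasks=n_tasks, n_raters=n_raters)
        print(f"{n_tasks} tasks x {n_raters} raters -> E-rho^2 = {rho:.2f}")

Under these assumed components the coefficient rises as tasks and raters are added, which is the sense in which more tasks and raters increase reliability.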
20
The Khoekhoegowab Personality Inventory: The Comparative Validity of a Locally Derived Measure of Traits ...
Thalmayer, Amber Gayle; Saucier, Gerard; Shino, Elizabeth. - : Humboldt-Universität zu Berlin, 2021
BASE


Hits by source: Catalogues 0 | Bibliographies 0 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 408