Yusra Mohammed Ali

Publications

Showing 1 to 4 of 4 items.
1.

Distractor Analysis in Multiple-Choice Items Using the Rasch Model

Keywords: Distractor analysis; Item response theory; Multiple-choice items; Rasch model

The multiple-choice (MC) item format is commonly used in educational assessments because it is economical and effective across a variety of content domains. However, numerous studies have examined the quality of MC items in high-stakes and higher-education assessments and found many flawed items, especially in terms of distractors. Such faulty items lead to misleading conclusions about student performance and the decisions based on it. Distractor analysis is therefore routinely conducted in assessments with MC items to ensure that high-quality items serve as the basis of inference. Item response theory (IRT) and Rasch models, however, have received little attention as tools for analyzing distractors. For that reason, the present study applied the Rasch model to a grammar test to analyze the distractors of its items. Specifically, the study investigated the quality of 10 instructor-written MC grammar items used in an undergraduate final exam, using the item responses of 310 English as a foreign language (EFL) students who had taken an advanced grammar course. The results showed acceptable fit to the Rasch model and high reliability, and malfunctioning distractors were identified.
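The core distractor check can be sketched as follows. This is a classical mean-ability tabulation that often accompanies a Rasch analysis, not the study's actual procedure; the flagging rule, function names, and data are all invented for illustration. A distractor is suspect when the examinees who choose it are, on average, as able as those who choose the keyed answer.

```python
def mean(xs):
    return sum(xs) / len(xs)

def distractor_report(responses, key, total_scores):
    """responses: option chosen per examinee for one item (e.g. 'A'..'D');
    total_scores: each examinee's score on the rest of the test."""
    by_option = {}
    for opt, score in zip(responses, total_scores):
        by_option.setdefault(opt, []).append(score)
    report = {}
    for opt, scores in by_option.items():
        report[opt] = {
            "n": len(scores),
            "prop": len(scores) / len(responses),
            "mean_score": mean(scores),
        }
    # Flag a distractor if its choosers outscore, on average,
    # those who chose the keyed option (invented rule of thumb).
    flags = [opt for opt, r in report.items()
             if opt != key and r["mean_score"] >= report[key]["mean_score"]]
    return report, flags

# Invented toy data: chosen option and rest-of-test score for 8 examinees.
responses = ["A", "B", "A", "C", "A", "D", "B", "A"]
scores    = [ 9,   3,   8,   9,   9,   2,   4,   8 ]
report, flags = distractor_report(responses, key="A", total_scores=scores)
```

Here option C attracts a high-scoring examinee and gets flagged, while B and D behave as distractors should.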
2.

Detecting Measurement Disturbance: Graphical Illustrations of Item Characteristic Curves

Keywords: Graphical displays; Item characteristic curves; Measurement disturbances; Model-data fit

Measurement disturbances refer to any conditions that affect the measurement of a psychological latent variable and result in inaccurate interpretation of item or person estimates derived from a measurement model. They are mainly attributed to the characteristics of the person, the properties of the items, and the interaction between the two. Although numerous researchers have detected measurement disturbances in various contexts, little attention has been devoted to exploring them in language testing and assessment, especially with graphical displays. This study demonstrates the utility of graphical displays, which go beyond the numeric infit and outfit statistics provided by the Rasch model, for exploring measurement disturbances in a listening comprehension test. The results revealed two types of outcomes when graphical displays are examined alongside their corresponding numeric fit values: congruent and incongruent associations. Graphical displays can thus provide diagnostic information about the performance of test items that is not captured by numeric values alone.
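The graphical idea behind the study can be sketched as follows, with invented data: bin examinees by ability, then compare the empirical proportion correct in each bin with the Rasch-model expectation P(correct) = 1 / (1 + exp(-(theta - b))). A gap between the empirical and model-implied item characteristic curve signals a disturbance even when aggregate fit statistics look acceptable. The binning scheme and function names are assumptions for illustration.

```python
import math

def rasch_p(theta, b):
    """Rasch-model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def icc_comparison(thetas, correct, b, n_bins=3):
    """Group (theta, 0/1) pairs into ability bins; return per-bin tuples of
    (mean theta, observed proportion correct, model-expected proportion)."""
    pairs = sorted(zip(thetas, correct))
    size = len(pairs) // n_bins
    rows = []
    for i in range(n_bins):
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        t_bar = sum(t for t, _ in chunk) / len(chunk)
        obs = sum(c for _, c in chunk) / len(chunk)
        rows.append((t_bar, obs, rasch_p(t_bar, b)))
    return rows

# Invented toy data for one item with difficulty b = 0.
thetas  = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
correct = [0,    0,    1,   1,   1,   1]
rows = icc_comparison(thetas, correct, b=0.0)
```

Plotting the observed column against the expected column (e.g. with matplotlib) reproduces the graphical display the abstract describes.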
3.

Evaluating Measurement Invariance in the IELTS Listening Comprehension Test

Keywords: Differential item functioning; IELTS; Measurement invariance; Rasch model

Measurement invariance (MI) refers to the degree to which a measurement instrument or scale produces consistent results across different groups or populations. It indicates whether the same construct is measured in the same way across groups such as cultures, genders, or age groups; if MI is established, scores on the test can be compared meaningfully across those groups. MI is most often examined with confirmatory factor analysis methods. In this study, we examine MI using the Rasch model instead. The responses of 211 EFL learners to the listening section of the IELTS were examined for MI across gender and across randomly selected subsamples, with item difficulty measures compared graphically under the Rasch model. Findings showed that, except for a few items, the IELTS listening items exhibit MI. Score comparisons across gender and other subgroups based on IELTS listening scores are therefore valid.
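The invariance check can be sketched as follows, using the negative log-odds of the proportion correct as a crude stand-in for a Rasch item difficulty (a real analysis would estimate difficulties with a Rasch package). Items whose group-wise difficulties differ by more than a tolerance are flagged as non-invariant. The tolerance, function names, and data are invented for illustration.

```python
import math

def logit_difficulty(item_correct):
    """Crude item difficulty: negative logit of the proportion correct.
    Assumes the proportion is strictly between 0 and 1."""
    p = sum(item_correct) / len(item_correct)
    return -math.log(p / (1.0 - p))

def invariance_flags(group_a, group_b, tol=0.5):
    """group_a / group_b: per-item lists of 0/1 responses for each group.
    Returns indices of items whose difficulty gap exceeds tol logits."""
    flags = []
    for i, (a, b) in enumerate(zip(group_a, group_b)):
        if abs(logit_difficulty(a) - logit_difficulty(b)) > tol:
            flags.append(i)
    return flags

# Invented toy data: two items, two groups of four examinees each.
group_a = [[1, 1, 0, 1], [1, 0, 0, 0]]
group_b = [[1, 1, 0, 1], [1, 1, 1, 0]]
flags = invariance_flags(group_a, group_b)
```

Item 0 behaves identically in both groups; item 1 is much easier for group B and is flagged, which is the pattern a graphical difficulty-by-difficulty plot would show as a point off the identity line.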
4.

Psychometric Modelling of Reading Aloud with the Rasch Model

Keywords: Rasch partial credit model; Reading aloud; Speaking test; Validation

Reading aloud is recommended as a simple technique for measuring speaking ability (Hughes & Hughes, 2020; Madsen, 1983). It is currently used in the Pearson Test of English and several other international English-as-a-second-language proficiency tests. Because the technique is simple, it can be combined with other techniques to measure foreign and second language learners' speaking ability. One issue with reading aloud as a testing technique is its psychometric modelling: owing to the peculiar structure of reading-aloud tasks, analysing them with item response theory models is not straightforward. In this study, the Rasch partial credit model (PCM) is proposed and used to model examinees' reading-aloud scores. The performances of 196 foreign language learners on five reading-aloud passages were analysed with the PCM. Findings showed that the data fit the PCM well and that the scores are highly reliable. Implications for the psychometric evaluation of reading aloud, or oral reading fluency, are discussed.
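The category probabilities underlying the partial credit model can be sketched as follows: for an examinee with ability theta on an item with step difficulties delta_1..delta_m, the probability of score x is proportional to exp of the cumulative sum of (theta - delta_j) for j up to x, normalised over all categories. This is the standard PCM formula, but the step values below are invented, not the study's estimates.

```python
import math

def pcm_probs(theta, deltas):
    """Return [P(X=0), ..., P(X=m)] under the Rasch partial credit model."""
    # Cumulative sums of (theta - delta_j); category 0 has an empty sum of 0.
    sums = [0.0]
    for d in deltas:
        sums.append(sums[-1] + (theta - d))
    exps = [math.exp(s) for s in sums]
    total = sum(exps)
    return [e / total for e in exps]

# One invented two-step passage: steps at -0.5 and 0.5 logits.
probs = pcm_probs(2.0, [-0.5, 0.5])
```

For an able examinee (theta = 2.0) the top score category dominates, as expected; with a single step of difficulty equal to theta, the two categories are equally likely.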
