Showing 1 to 15 of 42 results
Peer reviewed
PDF on ERIC
Kaya Uyanik, Gulden; Demirtas Tolaman, Tugba; Gur Erdogan, Duygu – International Journal of Assessment Tools in Education, 2021
This paper aims to examine and assess the questions included in the "Turkish Common Exam" for sixth graders, held in the first semester of 2018 as one of the common exams carried out by the Measurement and Evaluation Centers, in terms of question structure, quality, and taxonomic value. To this end, the test questions were examined…
Descriptors: Foreign Countries, Grade 6, Standardized Tests, Test Items
Peer reviewed
PDF on ERIC
Avsar, Asiye Sengül – Participatory Educational Research, 2022
It is necessary to provide evidence for the construct validity of scales. In particular, when new scales are developed, construct validity is investigated with Exploratory Factor Analysis (EFA). Factor extraction is generally performed via Principal Component Analysis (PCA), which is not exactly factor analysis, and the Principal Axis…
Descriptors: Factor Analysis, Automation, Construct Validity, Item Response Theory
Peer reviewed
PDF on ERIC
Sari, Halil Ibrahim; Karaman, Mehmet Akif – International Journal of Assessment Tools in Education, 2018
The current study shows the application of both classical test theory (CTT) and item response theory (IRT) to psychology data. The study discusses item-level analyses of the General Mattering Scale produced by the two theories, as well as strengths and weaknesses of both measurement approaches. The survey consisted of a total of five Likert-type…
Descriptors: Measures (Individuals), Test Theory, Item Response Theory, Likert Scales
Peer reviewed
PDF on ERIC
Ozberk, Eren Halil; Unsal Ozberk, Elif Bengi; Uluc, Sait; Oktem, Ferhunde – International Journal of Assessment Tools in Education, 2021
The Kaufman Brief Intelligence Test--Second Edition (KBIT-2) is designed to measure verbal and nonverbal abilities in a wide range of individuals from 4 years 0 months to 90 years 11 months of age. This study examines both the advantages of using Mokken Scale Analysis (MSA) in intelligence tests and the hierarchical order of the items in the…
Descriptors: Intelligence Tests, Nonparametric Statistics, Test Items, Test Construction
Peer reviewed
PDF on ERIC
Ilhan, Mustafa; Guler, Nese – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to compare difficulty indices calculated for open-ended items in accordance with the classical test theory (CTT) and the Many-Facet Rasch Model (MFRM). Although theoretical differences between CTT and MFRM occupy much space in the literature, the number of studies empirically comparing the two theories is quite limited.…
Descriptors: Difficulty Level, Test Items, Test Theory, Item Response Theory
Peer reviewed
PDF on ERIC
Deniz, Kaan Zulfikar; Ilican, Emel – International Journal of Assessment Tools in Education, 2021
This study aims to compare the G and Phi coefficients as estimated by D studies for a measurement tool with the G and Phi coefficients obtained from real cases in which items of differing difficulty levels were added and also to determine the conditions under which the D studies estimated reliability coefficients closer to reality. The study group…
Descriptors: Generalizability Theory, Test Items, Difficulty Level, Test Reliability
Peer reviewed
Direct link
Saatcioglu, Fatima Munevver; Sen, Sedat – International Journal of Testing, 2023
In this study, we illustrated an application of the confirmatory mixture IRT model for multidimensional tests. We aimed to examine the differences in student performance by domains with a confirmatory mixture IRT modeling approach. A three-dimensional and three-class model was analyzed by assuming content domains as dimensions and cognitive…
Descriptors: Item Response Theory, Foreign Countries, Elementary Secondary Education, Achievement Tests
Peer reviewed
PDF on ERIC
Mor, Ezgi; Kula-Kartal, Seval – International Journal of Assessment Tools in Education, 2022
Dimensionality is one of the most investigated concepts in psychological assessment, and there are many ways to determine the dimensionality of a measured construct. The Automated Item Selection Procedure (AISP) and DETECT are non-parametric methods that aim to determine the factorial structure of a data set. In the current study,…
Descriptors: Psychological Evaluation, Nonparametric Statistics, Test Items, Item Analysis
Peer reviewed
PDF on ERIC
Sayin, Ayfer; Sata, Mehmet – International Journal of Assessment Tools in Education, 2022
The aim of the present study was to examine Turkish teacher candidates' competency levels in writing different types of test items by utilizing Rasch analysis. In addition, the effect of the expertise of the raters scoring the items written by the teacher candidates was examined within the scope of the study. 84 Turkish teacher candidates…
Descriptors: Foreign Countries, Item Response Theory, Evaluators, Expertise
Peer reviewed
PDF on ERIC
Ayva Yörü, Fatma Gökçen; Atar, Hakan Yavuz – Journal of Pedagogical Research, 2019
The aim of this study is to examine whether the items in the mathematics subtest of the Centralized High School Entrance Placement Test [HSEPT] administered in 2012 by the Ministry of National Education in Turkey show DIF according to gender and type of school. For this purpose, SIBTEST, Breslow-Day, Lord's [chi-squared] and Raju's area…
Descriptors: Test Bias, Mathematics Tests, Test Items, Gender Differences
Peer reviewed
PDF on ERIC
Atilgan, Hakan; Demir, Elif Kübra; Ogretmen, Tuncay; Basokcu, Tahsin Oguz – International Journal of Progressive Education, 2020
A critical question is what level of reliability can be achieved when open-ended questions are used in large-scale selection tests. One of the aims of the present study is to determine what the reliability would be when the answers given by test-takers are scored by experts and open-ended short-answer questions are used in…
Descriptors: Foreign Countries, Secondary School Students, Test Items, Test Reliability
Peer reviewed
PDF on ERIC
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, Lord's chi-square and Raju's unsigned area methods with the 3PL model were used, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
Peer reviewed
PDF on ERIC
Sahin, Melek Gülsah; Yildirim, Yildiz; Boztunç Öztürk, Nagihan – Participatory Educational Research, 2023
A literature review shows that the development process of an achievement test is mainly investigated in dissertations. Moreover, preparing a form that sheds light on developing an achievement test is expected to guide those who will administer the test. In line with this, the current study aims to create an "Achievement Test Development Process…
Descriptors: Achievement Tests, Test Construction, Records (Forms), Mathematics Achievement
Peer reviewed
PDF on ERIC
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2020
Mokken models have recently become the preferred method among researchers from different fields in studies of nonparametric item response theory (NIRT). Despite the increasing application of these models, some features of this type of modelling need further study and explanation. Invariant item ordering (IIO) is one of these areas, which the…
Descriptors: Item Response Theory, Test Items, Nonparametric Statistics, Scoring
Peer reviewed
PDF on ERIC
Bozdag, Hüseyin Cihan; Türkoguz, Suat – International Online Journal of Primary Education, 2021
The study determines the conceptual understanding levels of primary school students on the concept of light according to the Rasch Model with a Four-tier Light Conceptual Understanding Test (LCUT). The participants were 355 (164 girls and 191 boys) primary school students studying at a public school in Izmir city center. In the study, the Rasch…
Descriptors: Foreign Countries, Elementary School Students, Grade 5, Item Response Theory