Showing 1 to 15 of 49 results
Peer reviewed
Direct link
Atar, Burcu; Atalay Kabasakal, Kubra; Kibrislioglu Uysal, Nermin – Journal of Experimental Education, 2023
The purpose of this study was to evaluate the population invariance of equating functions across country subgroups in TIMSS 2015 mathematics tests in relation to the raw-score distribution, DIF, and DTF. We used equipercentile and IRT observed-score equating methods. The results of the study indicate that there is a relationship between the…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Mathematics Tests
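As an illustrative aside: equipercentile equating, one of the two methods compared in the record above, maps each raw score on one test form to the score on the other form with the same percentile rank. The Python/NumPy sketch below uses simulated raw scores (not TIMSS data) and a nearest-rank approximation; the function names and score ranges are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def percentile_ranks(scores, max_score):
        # Mid-percentile rank of each integer score point:
        # (count below + half the count at the point) / n * 100.
        scores = np.asarray(scores)
        points = np.arange(max_score + 1)
        below = np.array([(scores < x).sum() for x in points])
        at = np.array([(scores == x).sum() for x in points])
        return (below + 0.5 * at) / len(scores) * 100

    def equipercentile_equate(form_x, form_y, max_score):
        # Map each raw score on form X to the form Y score point whose
        # percentile rank is closest (nearest-rank approximation).
        points = np.arange(max_score + 1)
        pr_x = percentile_ranks(form_x, max_score)
        pr_y = percentile_ranks(form_y, max_score)
        return np.array([points[np.argmin(np.abs(pr_y - p))] for p in pr_x])

    # Toy example: 40-item forms, form Y slightly easier than form X.
    rng = np.random.default_rng(0)
    x = rng.binomial(40, 0.55, size=2000)
    y = rng.binomial(40, 0.60, size=2000)
    print(equipercentile_equate(x, y, max_score=40)[18:23])  # equated scores for X = 18..22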
Peer reviewed
PDF on ERIC: Download full text
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of the SR item format are more complex than the traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Direct link
Saatcioglu, Fatima Munevver; Sen, Sedat – International Journal of Testing, 2023
In this study, we illustrated an application of the confirmatory mixture IRT model for multidimensional tests. We aimed to examine the differences in student performance by domains with a confirmatory mixture IRT modeling approach. A three-dimensional and three-class model was analyzed by assuming content domains as dimensions and cognitive…
Descriptors: Item Response Theory, Foreign Countries, Elementary Secondary Education, Achievement Tests
Peer reviewed
PDF on ERIC: Download full text
Erturk, Zafer; Oyar, Esra – International Journal of Assessment Tools in Education, 2021
Studies aiming to make cross-cultural comparisons should first establish measurement invariance in the groups to be compared, because results obtained from such comparisons may be artificial if measurement invariance cannot be established. The purpose of this study is to investigate the measurement invariance of the data obtained…
Descriptors: International Assessment, Foreign Countries, Attitude Measures, Mathematics
Peer reviewed
Direct link
Toland, Michael D.; Li, Caihong; Kodet, Jonathan; Reese, Robert J. – Measurement and Evaluation in Counseling and Development, 2021
We assessed the psychometric properties of the visual analog scale (VAS) form of the Outcome Rating Scale (ORS) using item response theory (IRT) with a sample of 2,109 Canadian counseling clients. Findings indicate that the ORS was unidimensional and that the equal-interval assumption was not tenable for the VAS.
Descriptors: Rating Scales, Psychometrics, Item Response Theory, Psychotherapy
Peer reviewed
PDF on ERIC: Download full text
Fukuzawa, Sherry; deBraga, Michael – Journal of Curriculum and Teaching, 2019
Graded Response Method (GRM) is an alternative to multiple-choice testing where students rank options according to their relevance to the question. GRM requires discrimination and inference between statements and is a cost-effective critical thinking assessment in large courses where open-ended answers are not feasible. This study examined…
Descriptors: Alternative Assessment, Multiple Choice Tests, Test Items, Test Format
Peer reviewed
PDF on ERIC: Download full text
Aksu, Gökhan; Güzeller, Cem Oktay – International Journal of Progressive Education, 2019
The purpose of this study is to analyze the studies available in the Web of Science database between 1980 and 2018 that include Item Response Theory among their keywords, using the bibliometric analysis method. A total of 1,367 academic works have been analyzed. The authors, journals and countries having the highest number of studies in the field and…
Descriptors: Item Response Theory, Scientific Concepts, Science Instruction, Databases
Peer reviewed
Direct link
Thompson, James J. – Measurement: Interdisciplinary Research and Perspectives, 2022
With the use of computerized testing, ordinary assessments can capture both answer accuracy and answer response time. For the Canadian Programme for the International Assessment of Adult Competencies (PIAAC) numeracy and literacy subtests, person ability, person speed, question difficulty, question time intensity, fluency (rate), person fluency…
Descriptors: Foreign Countries, Adults, Computer Assisted Testing, Network Analysis
Peer reviewed
Direct link
Shin, Jinnie; Bulut, Okan; Gierl, Mark J. – Journal of Experimental Education, 2020
The arrangement of response options in multiple-choice (MC) items, especially the location of the most attractive distractor, is considered critical in constructing high-quality MC items. In the current study, a sample of 496 undergraduate students taking an educational assessment course was given three test forms consisting of the same items but…
Descriptors: Foreign Countries, Undergraduate Students, Multiple Choice Tests, Item Response Theory
Peer reviewed
PDF on ERIC: Download full text
Wang, Peiyu; Coetzee, Karen; Strachan, Andrea; Monteiro, Sandra; Cheng, Liying – Canadian Journal of Applied Linguistics / Revue canadienne de linguistique appliquée, 2020
Internationally educated nurses' (IENs) English language proficiency is critical to professional licensure as communication is a key competency for safe practice. The Canadian English Language Benchmark Assessment for Nurses (CELBAN) is Canada's only Canadian Language Benchmarks (CLB) referenced examination used in the context of healthcare…
Descriptors: Item Response Theory, Language Tests, English (Second Language), Nurses
Peer reviewed
Direct link
Scribner, Emily D.; Harris, Sara E. – Journal of Geoscience Education, 2020
The Mineralogy Concept Inventory (MCI) is a statistically validated 18-question assessment that can be used to measure learning gains in introductory mineralogy courses. Development of the MCI was an iterative process involving expert consultation, student interviews, assessment deployment, and statistical analysis. Experts at the two universities…
Descriptors: Undergraduate Students, Mineralogy, Introductory Courses, Science Tests
Peer reviewed
PDF on ERIC: Download full text
Karadavut, Tugba; Cohen, Allan S.; Kim, Seock-Ho – International Journal of Assessment Tools in Education, 2019
Covariates have been used in mixture IRT models to help explain why examinees are classed into different latent classes. Previous research has considered manifest variables as covariates in a mixture Rasch analysis for prediction of group membership. Latent covariates, however, are more likely to have higher correlations with the latent class…
Descriptors: Item Response Theory, Classification, Correlation, International Assessment
Peer reviewed
Direct link
McIntosh, James – Scandinavian Journal of Educational Research, 2019
This article examines whether the way that PISA models item outcomes in mathematics affects the validity of its country rankings. As an alternative to PISA methodology a two-parameter model is applied to PISA mathematics item data from Canada and Finland for the year 2012. In the estimation procedure item difficulty and dispersion parameters are…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
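As an illustrative aside: a two-parameter logistic (2PL) IRT model of the general kind applied in the record above gives the probability of a correct response as a function of ability, item difficulty, and item discrimination. The sketch below is a minimal Python/NumPy illustration with invented item parameters and a grid-search ability estimate; it is not the estimation procedure used in the study.

    import numpy as np

    def irf_2pl(theta, a, b):
        # 2PL item response function: P(correct) = 1 / (1 + exp(-a * (theta - b))).
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def log_likelihood(theta, responses, a, b):
        # Log-likelihood of one examinee's 0/1 response vector.
        p = irf_2pl(theta, a, b)
        return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    # Toy example: 5 items, one examinee.
    a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])    # discrimination (dispersion-related)
    b = np.array([-1.0, -0.3, 0.0, 0.7, 1.5])  # difficulty
    resp = np.array([1, 1, 1, 0, 0])

    # Crude ability estimate: maximize the log-likelihood over a grid.
    grid = np.linspace(-4, 4, 801)
    theta_hat = grid[np.argmax([log_likelihood(t, resp, a, b) for t in grid])]
    print(round(theta_hat, 2))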
Peer reviewed
Direct link
Chalmers, R. Philip; Counsell, Alyssa; Flora, David B. – Educational and Psychological Measurement, 2016
Differential test functioning, or DTF, occurs when one or more items in a test demonstrate differential item functioning (DIF) and the aggregate of these effects is witnessed at the test level. In many applications, DTF can be more important than DIF when the overall effects of DIF at the test level can be quantified. However, optimal statistical…
Descriptors: Test Bias, Sampling, Test Items, Statistical Analysis
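As an illustrative aside: the record above describes DTF as the test-level aggregate of item-level DIF. One simple way to express that idea is to average, over an ability grid, the difference between the expected test scores implied by reference-group and focal-group item parameters. The Python/NumPy sketch below uses invented 2PL parameters and is only a schematic signed-DTF-style summary, not the optimal statistics discussed in the article.

    import numpy as np

    def expected_test_score(theta_grid, a, b):
        # Expected total score under a 2PL model at each ability in the grid.
        p = 1.0 / (1.0 + np.exp(-a * (theta_grid[:, None] - b)))
        return p.sum(axis=1)

    def signed_dtf(a_ref, b_ref, a_foc, b_foc, theta_grid):
        # Mean difference in expected test scores between the two sets of
        # item parameters: a simple signed, test-level DIF aggregate.
        diff = expected_test_score(theta_grid, a_ref, b_ref) \
             - expected_test_score(theta_grid, a_foc, b_foc)
        return diff.mean()

    # Toy example: 4 items; one item is harder for the focal group (DIF).
    a_ref = np.array([1.0, 1.2, 0.8, 1.1])
    b_ref = np.array([-0.5, 0.0, 0.5, 1.0])
    a_foc = a_ref.copy()
    b_foc = b_ref + np.array([0.0, 0.0, 0.4, 0.0])
    print(signed_dtf(a_ref, b_ref, a_foc, b_foc, np.linspace(-3, 3, 61)))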
Peer reviewed
Direct link
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver – Large-scale Assessments in Education, 2017
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to…
Descriptors: Student Evaluation, International Assessment, Adults, Competence
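As an illustrative aside: the abstract above is truncated before it describes the model-based approach, but a common, simpler device in the disengagement literature is to flag responses faster than an item-specific response-time threshold as likely rapid guesses. The Python/NumPy sketch below illustrates only that threshold idea; it is not the authors' model, and the data and thresholds are invented.

    import numpy as np

    def disengagement_flags(response_times, thresholds):
        # Flag responses faster than the per-item threshold (likely rapid guesses).
        # response_times: persons x items matrix of seconds; thresholds: per-item seconds.
        return response_times < np.asarray(thresholds)

    def engagement_rate(response_times, thresholds):
        # Proportion of non-flagged (engaged) responses per person.
        flags = disengagement_flags(response_times, thresholds)
        return 1.0 - flags.mean(axis=1)

    # Toy data: 3 persons x 4 items, a 5-second threshold for every item.
    rt = np.array([[12.0, 30.5,  2.1, 18.0],
                   [ 3.2,  4.0,  2.8,  3.5],
                   [25.0, 41.2, 19.8, 33.0]])
    print(engagement_rate(rt, thresholds=[5.0, 5.0, 5.0, 5.0]))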