Showing 1 to 15 of 92 results
Peer reviewed
PDF on ERIC (full text available)
Pentecost, Thomas C.; Raker, Jeffery R.; Murphy, Kristen L. – Practical Assessment, Research & Evaluation, 2023
Using multiple versions of an assessment has the potential to introduce item environment effects. These types of effects result in version-dependent item characteristics (i.e., difficulty and discrimination). Methods to detect such effects, and their resulting implications, are important for all levels of assessment where multiple forms of an assessment…
Descriptors: Item Response Theory, Test Items, Test Format, Science Tests
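As an illustration of the version effects this entry describes (not the authors' IRT-based method), here is a minimal Python sketch that compares classical item difficulty across two forms with a two-proportion z test. The simulated data, the 2.58 criterion, and the implicit assumption of randomly equivalent groups are all illustrative.

```python
import numpy as np

def difficulty_shift(form_a, form_b, crit=2.58):
    """Two-proportion z test for per-item difficulty differences across versions.

    form_a, form_b: persons x items 0/1 score matrices for the same items
    administered on two versions of the assessment.
    """
    n_a, n_b = form_a.shape[0], form_b.shape[0]
    p_a, p_b = form_a.mean(axis=0), form_b.mean(axis=0)
    pooled = (form_a.sum(axis=0) + form_b.sum(axis=0)) / (n_a + n_b)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, np.where(np.abs(z) > crit)[0]

# Toy data: version B is generated to be harder, so most items get flagged.
rng = np.random.default_rng(0)
form_a = (rng.random((500, 20)) < 0.70).astype(int)
form_b = (rng.random((480, 20)) < 0.60).astype(int)
z, flagged = difficulty_shift(form_a, form_b)
print("items with a version-dependent difficulty shift:", flagged)
```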
Peer reviewed
PDF on ERIC (full text available)
Aybek, Eren Can; Toraman, Cetin – International Journal of Assessment Tools in Education, 2022
The current study investigates the optimum number of response categories for Likert-type scales under item response theory (IRT). The data were collected from university students, mainly attending the faculty of medicine and the faculty of education. A form of the "Social Gender Equity Scale" developed by Gozutok et al. (2017)…
Descriptors: Likert Scales, Item Response Theory, College Students, Test Reliability
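A hedged classical-reliability companion to the IRT comparison described above (not the authors' analysis): the sketch collapses simulated 5-point Likert responses to 3 points and compares Cronbach's alpha. The latent-trait data and the collapsing scheme are assumptions made purely for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a persons x items matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
raw = trait + rng.normal(scale=1.0, size=(300, 8))   # toy responses, one latent trait
five_point = np.clip(np.round(raw + 3), 1, 5)

# Collapse 1-2 -> 1, 3 -> 2, 4-5 -> 3
three_point = np.digitize(five_point, bins=[2.5, 3.5]) + 1

print("alpha, 5 categories:", round(cronbach_alpha(five_point), 3))
print("alpha, 3 categories:", round(cronbach_alpha(three_point), 3))
```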
Peer reviewed
Direct link
Azevedo, Jose Manuel; Oliveira, Ema P.; Beites, Patrícia Damas – International Journal of Information and Learning Technology, 2019
Purpose: The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) to obtain an assessment method that is as fair as possible for students. The authors intend to ascertain whether it is possible to control the quality of the MCQs contained in a bank of questions, implemented in Moodle, presenting some evidence…
Descriptors: Learning Analytics, Multiple Choice Tests, Test Theory, Item Response Theory
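As a rough sketch of the kind of MCQ quality control described above, the snippet below computes classical difficulty and point-biserial discrimination and flags items for review. The screening thresholds and the simulated response matrix are illustrative assumptions; in practice the scored matrix would come from a Moodle export, and the paper's own analyses go further.

```python
import numpy as np

def mcq_quality(scored, easy=0.9, hard=0.2, min_disc=0.2):
    """Flag multiple-choice items by classical difficulty and discrimination.

    scored: persons x items matrix of 0/1 scores.
    """
    difficulty = scored.mean(axis=0)                        # proportion correct
    rest = scored.sum(axis=1, keepdims=True) - scored       # rest score per item
    discrimination = np.array([
        np.corrcoef(scored[:, j], rest[:, j])[0, 1]
        for j in range(scored.shape[1])
    ])
    flagged = (difficulty < hard) | (difficulty > easy) | (discrimination < min_disc)
    return difficulty, discrimination, np.where(flagged)[0]

rng = np.random.default_rng(2)
theta = rng.normal(size=(200, 1))
b = rng.uniform(-2.5, 2.5, size=30)                         # toy item difficulties
scored = (rng.random((200, 30)) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
diff, disc, flagged = mcq_quality(scored)
print("items flagged for review:", flagged)
```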
Peer reviewed
PDF on ERIC (full text available)
Camenares, Devin – International Journal for the Scholarship of Teaching and Learning, 2022
Balancing assessment of learning outcomes with the expectations of students is a perennial challenge in education. Difficult exams, in which many students perform poorly, exacerbate this problem and can inspire a wide variety of interventions, such as a grading curve. However, addressing poor performance can sometimes distort or inflate grades and…
Descriptors: College Students, Student Evaluation, Tests, Test Items
Peer reviewed
PDF on ERIC (full text available)
Barno S. Abdullaeva; Diyorjon Abdullaev; Nurislom I. Khursanov; Khurshida B. Kadirova; Laylo Djuraeva – International Journal of Language Testing, 2024
Cloze tests are commonly used in language testing as a quick measure of overall language ability or reading comprehension. A problem for the analysis of cloze tests with item response theory models is that cloze test items are locally dependent. This leads to the violation of the conditional or local independence assumption of IRT models. In this…
Descriptors: Cloze Procedure, Language Tests, Test Items, Correlation
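Local dependence among cloze items is usually diagnosed from residual correlations after conditioning on ability. The sketch below is a simplified, nonparametric analogue of Yen's Q3 that conditions on rest scores instead of a fitted IRT ability estimate; the simulated data and the artificially duplicated item pair are there only to make the diagnostic visible.

```python
import numpy as np

def residual_dependence(scored):
    """Rough local-dependence check for 0/1 items.

    Each item's expected score given the rest score is estimated
    nonparametrically (mean score of examinees with the same rest score);
    the correlations of the resulting residuals play the role that
    Yen's Q3 plays with a fitted IRT model.
    """
    n_persons, n_items = scored.shape
    residuals = np.empty_like(scored, dtype=float)
    for j in range(n_items):
        rest = scored.sum(axis=1) - scored[:, j]
        expected = np.empty(n_persons)
        for r in np.unique(rest):
            mask = rest == r
            expected[mask] = scored[mask, j].mean()
        residuals[:, j] = scored[:, j] - expected
    return np.corrcoef(residuals.T)

rng = np.random.default_rng(3)
theta = rng.normal(size=(400, 1))
prob = 1 / (1 + np.exp(-(theta - rng.normal(size=12))))
scored = (rng.random((400, 12)) < prob).astype(int)
scored[:, 11] = scored[:, 10]                # force one locally dependent pair
q3_like = residual_dependence(scored)
i, j = np.triu_indices(12, k=1)
worst = np.argmax(q3_like[i, j])
print("most dependent pair:", (i[worst], j[worst]), round(q3_like[i, j][worst], 2))
```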
Peer reviewed
Direct link
Nishizawa, Hitoshi – Language Testing, 2023
In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to expectedly familiar American English, the test features Hawai'i,…
Descriptors: Construct Validity, Listening Comprehension Tests, Language Tests, English (Second Language)
Peer reviewed
PDF on ERIC (full text available)
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, variants of the SR item format are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Direct link
Glamocic, Džana Salibašic; Mešic, Vanes; Neumann, Knut; Sušac, Ana; Boone, William J.; Aviani, Ivica; Hasovic, Elvedin; Erceg, Nataša; Repnik, Robert; Grubelnik, Vladimir – Physical Review Physics Education Research, 2021
Item banks are generally considered the basis of a new generation of educational measurement. In combination with specialized software, they can facilitate the computerized assembling of multiple pre-equated test forms. However, for the advantages of item banks to be fully realized, it is important that they store a relatively large…
Descriptors: Item Banks, Test Items, Item Response Theory, Item Sampling
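One reason a bank needs to store many calibrated items is so that several difficulty-matched forms can be assembled from it. The sketch below is a deliberately simple greedy split by item difficulty, not the specialized assembly software the entry refers to; the simulated Rasch difficulties are an assumption for illustration.

```python
import numpy as np

def assemble_two_forms(difficulties):
    """Split a calibrated item bank into two difficulty-matched forms.

    Items are sorted by difficulty and dealt alternately, so both forms
    cover the same difficulty range with similar means.
    """
    order = np.argsort(difficulties)
    return order[0::2], order[1::2]

rng = np.random.default_rng(4)
bank_b = rng.normal(loc=0.0, scale=1.2, size=60)   # toy calibrated difficulties
form_a, form_b = assemble_two_forms(bank_b)
print("mean difficulty, form A:", round(bank_b[form_a].mean(), 3))
print("mean difficulty, form B:", round(bank_b[form_b].mean(), 3))
```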
Peer reviewed
PDF on ERIC (full text available)
Ghio, Fernanda Belén; Bruzzone, Manuel; Rojas-Torres, Luis; Cupani, Marcos – European Journal of Science and Mathematics Education, 2022
In the last decades, the development of computerized adaptive testing (CAT) has allowed more precise measurements with a smaller number of items. In this study, we develop an item bank (IB) to generate the adaptive algorithm and simulate the functioning of CAT to assess the domains of mathematical knowledge in Argentinian university students…
Descriptors: Test Items, Item Banks, Adaptive Testing, Mathematics Tests
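A minimal CAT loop under a 2PL model, with maximum-information item selection and an expected a posteriori (EAP) ability update after each response; the simulated item bank, the fixed 20-item stopping rule, and the standard-normal prior are illustrative assumptions, not details of the study.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def eap(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate with a standard-normal prior."""
    post = np.exp(-0.5 * grid**2)
    for x, aj, bj in zip(responses, a, b):
        p = p2pl(grid, aj, bj)
        post *= p**x * (1 - p)**(1 - x)
    return (grid * post).sum() / post.sum()

rng = np.random.default_rng(5)
a_bank = rng.uniform(0.8, 2.0, size=200)           # toy calibrated bank
b_bank = rng.normal(size=200)
true_theta, max_items = 0.7, 20

administered, answers, theta_hat = [], [], 0.0
for _ in range(max_items):
    p = p2pl(theta_hat, a_bank, b_bank)
    info = a_bank**2 * p * (1 - p)                  # 2PL item information
    info[administered] = -np.inf                    # never reuse an item
    item = int(np.argmax(info))
    answers.append(int(rng.random() < p2pl(true_theta, a_bank[item], b_bank[item])))
    administered.append(item)
    theta_hat = eap(answers, a_bank[administered], b_bank[administered])

print("estimated theta after", max_items, "items:", round(theta_hat, 2))
```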
Peer reviewed
Direct link
Hansen, John; Stewart, John – Physical Review Physics Education Research, 2021
This work is the fourth of a series of papers applying multidimensional item response theory (MIRT) to widely used physics conceptual assessments. This study applies MIRT analysis using both exploratory and confirmatory methods to the Brief Electricity and Magnetism Assessment (BEMA) to explore the assessment's structure and to determine a…
Descriptors: Item Response Theory, Science Tests, Energy, Magnets
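A crude dimensionality check that stands in for the exploratory MIRT analysis described above: eigenvalues of the inter-item correlation matrix for simulated two-dimensional data. This is only a screening step; a full MIRT fit (and tetrachoric rather than Pearson correlations for dichotomous items) requires dedicated software, and the loadings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy 0/1 data driven by two latent dimensions (say, electricity vs. magnetism).
theta = rng.normal(size=(600, 2))
loadings = np.vstack([np.repeat([[1.0, 0.2]], 15, axis=0),
                      np.repeat([[0.2, 1.0]], 15, axis=0)])
logits = theta @ loadings.T - rng.normal(size=30)
scored = (rng.random((600, 30)) < 1 / (1 + np.exp(-logits))).astype(int)

eigvals = np.linalg.eigvalsh(np.corrcoef(scored.T))[::-1]
print("largest eigenvalues:", np.round(eigvals[:5], 2))
# Two eigenvalues standing well above the rest suggest roughly two dimensions.
```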
Peer reviewed
Direct link
Ghaemi, Hamed – Language Testing in Asia, 2022
Listening comprehension in English, as one of the most fundamental skills, has an essential role in the process of learning English. Nonparametric Item Response Theory (NIRT) is a probabilistic-nonparametric approach to item response theory (IRT) that assesses the unidimensionality and adaptability of a test. NIRT techniques are a useful tool…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Listening Comprehension Tests
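Mokken scale analysis is the most widely used NIRT toolkit; the sketch below computes Loevinger's scalability coefficient H for dichotomous items from first principles. The simulated responses and the common H > 0.3 rule of thumb are illustrative, and the study's own NIRT procedures may differ.

```python
import numpy as np

def loevinger_h(scored):
    """Loevinger's scalability coefficient H for 0/1 items (Mokken scaling).

    For each item pair, a Guttman error is passing the harder item while
    failing the easier one; H = 1 - (observed errors) / (errors expected
    under marginal independence), summed over all pairs.
    """
    n, k = scored.shape
    p = scored.mean(axis=0)
    obs = exp = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            obs += np.sum((scored[:, easy] == 0) & (scored[:, hard] == 1))
            exp += n * (1 - p[easy]) * p[hard]
    return 1 - obs / exp

rng = np.random.default_rng(7)
theta = rng.normal(size=(500, 1))
prob = 1 / (1 + np.exp(-1.5 * (theta - np.linspace(-1.5, 1.5, 10))))
scored = (rng.random((500, 10)) < prob).astype(int)
print("H =", round(loevinger_h(scored), 2))   # H > 0.3 is a common rule of thumb
```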
Peer reviewed
PDF on ERIC (full text available)
Whitaker, Douglas; Barss, Joseph; Drew, Bailey – Online Submission, 2022
Challenges to measuring students' attitudes toward statistics remain despite decades of focused research. Measuring the expectancy-value theory (EVT) Cost construct has been especially challenging owing in part to the historical lack of research about it. To measure the EVT Cost construct better, this study asked university students to respond to…
Descriptors: Statistics Education, College Students, Student Attitudes, Likert Scales
Wenyue Ma – ProQuest LLC, 2023
Foreign language placement testing, an important component in university foreign language programs, has received considerable, but not copious, attention over the years in second language (L2) testing research (Norris, 2004), and that attention has been concentrated mostly on L2 English. In contrast to validation research on L2 English placement testing, the…
Descriptors: Second Language Learning, Chinese, Student Placement, Placement Tests
Peer reviewed
Direct link
Traxler, Adrienne; Henderson, Rachel; Stewart, John; Stewart, Gay; Papak, Alexis; Lindell, Rebecca – Physical Review Physics Education Research, 2018
Research on the test structure of the Force Concept Inventory (FCI) has largely ignored gender, and research on FCI gender effects (often reported as "gender gaps") has seldom interrogated the structure of the test. These rarely crossed streams of research leave open the possibility that the FCI may not be structurally valid across…
Descriptors: Physics, Science Instruction, Sex Fairness, Gender Differences
Peer reviewed
PDF on ERIC (full text available)
Sarallah Jafaripour; Omid Tabatabaei; Hadi Salehi; Hossein Vahid Dastjerdi – International Journal of Language Testing, 2024
The purpose of this study was to examine gender and discipline-based Differential Item Functioning (DIF) and Differential Distractor Functioning (DDF) on the Islamic Azad University English Proficiency Test (IAUEPT). The study evaluated DIF and DDF across genders and disciplines using the Rasch model. To conduct DIF and DDF analysis, the examinees…
Descriptors: Item Response Theory, Test Items, Language Tests, Language Proficiency
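The study above uses the Rasch model for DIF and DDF; as an alternative illustration only, the sketch below applies the Mantel-Haenszel procedure, which stratifies examinees by total score and compares the odds of a correct response across groups. The toy data, the injected DIF on one item, and the ETS delta transform are for demonstration, not a reconstruction of the IAUEPT analysis.

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Mantel-Haenszel common odds ratio for one item, stratified by total score.

    item:  0/1 scores on the studied item
    group: 0 = reference, 1 = focal
    total: total test score used as the matching variable
    """
    num = den = 0.0
    for t in np.unique(total):
        m = total == t
        ref, foc = m & (group == 0), m & (group == 1)
        num += np.sum(ref & (item == 1)) * np.sum(foc & (item == 0)) / m.sum()
        den += np.sum(ref & (item == 0)) * np.sum(foc & (item == 1)) / m.sum()
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)            # ETS MH D-DIF (delta) scale

rng = np.random.default_rng(8)
group = rng.integers(0, 2, size=800)
theta = rng.normal(size=800)
logits = theta[:, None] - np.zeros(15)
logits[:, 0] -= 0.8 * group                        # item 0 is harder for the focal group
scored = (rng.random((800, 15)) < 1 / (1 + np.exp(-logits))).astype(int)

alpha, delta = mantel_haenszel_dif(scored[:, 0], group, scored.sum(axis=1))
print("MH odds ratio:", round(alpha, 2), " ETS delta:", round(delta, 2))
```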