Publication Date
  In 2025: 0
  Since 2024: 2
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 7
  Since 2006 (last 20 years): 11
Descriptor
  Test Items: 43
  Item Response Theory: 19
  Test Construction: 18
  Computer Assisted Testing: 12
  Test Theory: 12
  Latent Trait Theory: 11
  Item Analysis: 10
  Test Validity: 10
  Literature Reviews: 9
  Test Bias: 8
  Achievement Tests: 7
Author
  Haladyna, Tom: 2
  Hambleton, Ronald K.: 2
  Roid, Gale: 2
  Roid, Gale H.: 2
  Ajideh, Parviz: 1
  Alagoz, Cigdem: 1
  Baird, Jo-Anne: 1
  Ben-Porath, Yossef S.: 1
  Blanco, María Paz: 1
  Boztunç Öztürk, Nagihan: 1
  Bracken, Bruce A.: 1
Publication Type
  Information Analyses: 43
  Journal Articles: 26
  Speeches/Meeting Papers: 13
  Reports - Research: 11
  Reports - Evaluative: 8
  Opinion Papers: 4
  Guides - Non-Classroom: 2
  Reference Materials -…: 1
  Reports - Descriptive: 1
Education Level
  Elementary Secondary Education: 2
  Higher Education: 2
  Secondary Education: 1
Audience
  Researchers: 4
  Practitioners: 1
  Teachers: 1
Location
  Iran: 1
  Minnesota: 1
  Netherlands: 1
  Turkey: 1
Assessments and Surveys
  ACT Assessment: 1
  International English…: 1
  Minnesota Multiphasic…: 1
  National Assessment of…: 1
  Program for International…: 1
  Test of English as a Foreign…: 1
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory under maximum likelihood and Bayesian approaches in different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
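To make the comparison concrete, here is a minimal sketch, not taken from the article, of estimating one examinee's ability under a 2PL model by maximum likelihood and by a Bayesian maximum a posteriori (MAP) approach with a standard-normal prior; the item parameters and response pattern are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 2PL item parameters (a = discrimination, b = difficulty) and responses
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])
b = np.array([-1.0, -0.3, 0.0, 0.7, 1.4])
responses = np.array([1, 1, 0, 1, 0])

def p_correct(theta):
    """2PL probability of answering each item correctly at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta):
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def neg_log_posterior(theta):
    # Adds a standard-normal prior on theta, giving a Bayesian (MAP) estimate
    return neg_log_likelihood(theta) + 0.5 * theta**2

ml = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
map_ = minimize_scalar(neg_log_posterior, bounds=(-4, 4), method="bounded").x
print(f"ML estimate:  {ml:.3f}")
print(f"MAP estimate: {map_:.3f}")  # pulled toward the prior mean of 0
```

With only a few items the MAP estimate shrinks toward the prior mean, which is the kind of difference the varied conditions (prior type, sample size, test length, model) are designed to expose.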
Lions, Séverin; Monsalve, Carlos; Dartnell, Pablo; Blanco, María Paz; Ortega, Gabriel; Lemarié, Julie – Applied Measurement in Education, 2022
Multiple-choice tests are widely used in education, often for high-stakes assessment purposes. Consequently, these tests should be constructed following the highest standards. Many efforts have been undertaken to advance item-writing guidelines intended to improve tests. One important issue is the unwanted effects of the options' position on test…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Construction, Guidelines
Cintron, Dakota W. – ETS Research Report Series, 2021
The extent to which a test's time limit alters a test taker's performance is known as speededness. The manifestation of speededness, or speeded behavior on a test, can be in the form of random guessing, leaving a substantial proportion of test items unanswered, or rushed test-taking behavior in general. Speeded responses do not depend solely on a…
Descriptors: Classification, Research and Development, Timed Tests, Guessing (Tests)
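As a rough illustration of the behaviors listed above (random guessing, unanswered items, rushing), the sketch below flags response patterns that look speeded; the response times, thresholds, and cutoffs are invented and do not represent the report's method.

```python
import numpy as np

# Invented response times (seconds) for two examinees on a five-item test;
# NaN marks an item left unanswered.
rt = np.array([[35.0, 42.0, 3.0, 2.0, np.nan],
               [50.0, 61.0, 48.0, 55.0, 40.0]])

def speededness_flags(rt, rapid_cutoff=5.0, max_omit_rate=0.1, max_rapid_rate=0.1):
    """Flag patterns consistent with speeded behavior: too many omitted items
    or too many implausibly fast ("rapid guess") responses."""
    omitted = np.isnan(rt)
    filled = np.where(omitted, np.inf, rt)   # an omitted item cannot also count as rapid
    rapid = filled < rapid_cutoff
    return (omitted.mean(axis=1) > max_omit_rate) | (rapid.mean(axis=1) > max_rapid_rate)

print(speededness_flags(rt))  # [ True False]
```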
Sahin, Melek Gülsah; Yildirim, Yildiz; Boztunç Öztürk, Nagihan – Participatory Educational Research, 2023
A review of the literature shows that the development process of an achievement test is mainly examined in dissertations. Moreover, a form that lays out the achievement test development process is expected to guide those who will administer such tests. Accordingly, the current study aims to create an "Achievement Test Development Process…
Descriptors: Achievement Tests, Test Construction, Records (Forms), Mathematics Achievement
Huu Thanh Minh Nguyen; Nguyen Van Anh Le – TESL-EJ, 2024
Comparing language tests with their test preparation materials has important implications for the latter's validity and reliability. However, few studies compare such materials across a wide range of indices. Therefore, this study investigated the text complexity of IELTS academic reading tests (IRT) and IELTS reading practice tests (IRPrT).…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Readability
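For readers unfamiliar with the indices used in such text-complexity comparisons, here is a rough sketch of one classic measure, the Flesch-Kincaid grade level; the vowel-group syllable heuristic is crude and the sample sentence is invented, so this is an illustration rather than the study's procedure.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The examination of reading passages requires careful attention to vocabulary, "
          "sentence length, and the density of academic terminology.")
print(f"Flesch-Kincaid grade: {flesch_kincaid_grade(sample):.1f}")
```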
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remains in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused, specifically, on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
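Because the review centers on how the incorrect options (distractors) function, a minimal sketch of a classic upper/lower-group distractor analysis follows; the responses, scores, answer key, and 27% grouping rule are illustrative assumptions, not data or methods from the article.

```python
import numpy as np

# Hypothetical data for one item (key = "B"): each examinee's chosen option and total score
choices = np.array(["B", "A", "B", "C", "B", "D", "A", "B", "C", "B",
                    "A", "B", "D", "B", "A", "C", "B", "B", "D", "A"])
scores = np.array([28, 12, 25, 14, 30, 10, 17, 27, 11, 29,
                   13, 24, 9, 26, 15, 12, 23, 22, 8, 16])

def distractor_analysis(choices, scores, options=("A", "B", "C", "D")):
    """Proportion of the top and bottom 27% of scorers endorsing each option.
    A functioning distractor attracts clearly more low scorers than high scorers."""
    k = max(1, round(0.27 * len(scores)))
    order = np.argsort(scores)
    lower, upper = order[:k], order[-k:]
    return {opt: (float(np.mean(choices[upper] == opt)),
                  float(np.mean(choices[lower] == opt))) for opt in options}

for opt, (p_upper, p_lower) in distractor_analysis(choices, scores).items():
    print(f"{opt}: upper={p_upper:.2f}  lower={p_lower:.2f}")
```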
Hopfenbeck, Therese N.; Lenkeit, Jenny; El Masri, Yasmine; Cantrell, Kate; Ryan, Jeanne; Baird, Jo-Anne – Scandinavian Journal of Educational Research, 2018
International large-scale assessments are on the rise, with the Programme for International Student Assessment (PISA) seen by many as having strategic prominence in education policy debates. The present article reviews PISA-related English-language peer-reviewed articles from the programme's first cycle in 2000 to its most current in 2015. Five…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Ajideh, Parviz; Farrokhi, Farahman; Nourdad, Nava – World Journal of Education, 2012
Dynamic assessment, as a complementary approach to traditional static assessment, emphasizes the learning process and accounts for the amount and nature of examiner investment. The present qualitative study analyzed interactions for 270 reading test items, which were recorded and transcribed. The reading ability of 9 EFL participants at three…
Descriptors: English (Second Language), Second Language Learning, Learning Processes, Evaluation Methods
Stone, Elizabeth; Davey, Tim – Educational Testing Service, 2011
There has been an increased interest in developing computer-adaptive testing (CAT) and multistage assessments for K-12 accountability assessments. The move to adaptive testing has been met with some resistance by those in the field of special education who express concern about routing of students with divergent profiles (e.g., some students with…
Descriptors: Disabilities, Adaptive Testing, Accountability, Computer Assisted Testing
Kim, Seock-Ho; Cohen, Allan S.; Alagoz, Cigdem; Kim, Sukwoo – Journal of Educational Measurement, 2007
Data from a large-scale performance assessment (N = 105,731) were analyzed with five differential item functioning (DIF) detection methods for polytomous items to examine the congruence among the DIF detection methods. Two different versions of the item response theory (IRT) model-based likelihood ratio test, the logistic regression likelihood…
Descriptors: Performance Based Assessment, Performance Tests, Item Response Theory, Test Bias
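One family of methods the study compares is the logistic regression likelihood ratio approach to DIF detection; the sketch below applies that idea to a simulated dichotomous item with invented data (the article's items are polytomous, so this is a simplified stand-in, not the authors' analysis).

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000

# Simulated matching variable (total score), group membership, and an item with built-in uniform DIF
total = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)          # 0 = reference group, 1 = focal group
logit = 1.2 * total - 0.5 * group      # the -0.5 term is the injected DIF effect
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Reduced model: item ~ total; full model adds the group term (tests uniform DIF)
fit_reduced = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
fit_full = sm.Logit(item, sm.add_constant(np.column_stack([total, group]))).fit(disp=0)

lr_stat = 2 * (fit_full.llf - fit_reduced.llf)   # likelihood ratio statistic, 1 df
print(f"LR = {lr_stat:.2f}, p = {chi2.sf(lr_stat, df=1):.4f}")
```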
Hambleton, Ronald K.; Swaminathan, H. – 1985
Comments are made on the review papers presented by six Dutch psychometricians: Ivo Molenaar, Wim van der Linden, Ed Roskam, Arnold Van den Wollenberg, Gideon Mellenbergh, and Dato de Gruijter. Molenaar has embraced a pragmatic viewpoint on Bayesian methods, using both empirical and pure approaches to solve educational research problems. Molenaar…
Descriptors: Bayesian Statistics, Decision Making, Elementary Secondary Education, Foreign Countries
Forbey, Johnathan D.; Ben-Porath, Yossef S. – Psychological Assessment, 2007
Computerized adaptive testing in personality assessment can improve efficiency by significantly reducing the number of items administered to answer an assessment question. Two approaches have been explored for adaptive testing in computerized personality assessment: item response theory and the countdown method. In this article, the authors…
Descriptors: Personality Traits, Computer Assisted Testing, Test Validity, Personality Assessment
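Of the two approaches named, the countdown method reduces to a simple stopping rule: stop presenting a scale's items once the classification decision can no longer change. The sketch below is a generic rendering of that logic with an invented 10-item true/false scale and cutoff, not the authors' implementation.

```python
def countdown_administration(responses, cutoff):
    """Administer items in fixed order; stop once the running score reaches the cutoff
    or can no longer reach it even if every remaining item were endorsed."""
    score = 0
    for administered, endorsed in enumerate(responses, start=1):
        score += endorsed
        remaining = len(responses) - administered
        if score >= cutoff:
            return "above cutoff", administered
        if score + remaining < cutoff:
            return "below cutoff", administered
    return "below cutoff", len(responses)

# Hypothetical 10-item scale with a cutoff of 4 endorsed items
print(countdown_administration([1, 0, 1, 1, 1, 0, 0, 1, 0, 0], cutoff=4))  # ('above cutoff', 5)
print(countdown_administration([0, 0, 0, 0, 1, 0, 0, 0, 0, 0], cutoff=4))  # ('below cutoff', 8)
```

Skipping the items that cannot change the decision is what yields the efficiency gain described above.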

Roid, Gale; Haladyna, Tom – Review of Educational Research, 1980
A continuum of item-writing methods is proposed ranging from informal-subjective methods to algorithmic-objective methods. Examples of techniques include objective-based item writing, amplified objectives, item forms, facet design, domain-referenced concept testing, and computerized techniques. (Author/CP)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Criterion Referenced Tests

Whitely, Susan E. – Intelligence, 1980
This article examines the potential contribution of latent trait models to the study of intelligence. Nontechnical introductions to both unidimensional and multidimensional latent trait models are given. Multidimensional latent trait models can be used to test alternative multiple component theories of test item processing. (Author/CTM)
Descriptors: Ability, Aptitude Tests, Cognitive Processes, Intelligence
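To make the unidimensional/multidimensional distinction concrete, a compensatory multidimensional latent trait (MIRT-style) item response function can be written in a few lines; the ability vector, discriminations, and intercept below are invented for illustration and are not taken from the article.

```python
import numpy as np

def mirt_probability(theta, a, d):
    """Compensatory multidimensional 2PL: P(correct) given component abilities theta,
    discrimination vector a, and intercept d. High ability on one component can
    compensate for low ability on another."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

# Hypothetical item loading on two processing components
print(mirt_probability(theta=np.array([0.5, -0.2]), a=np.array([1.3, 0.7]), d=-0.4))
```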

Leary, Linda F.; Dorans, Neil J. – Review of Educational Research, 1985
Research on the potential effects of different item arrangement schemes on item statistics is reviewed for three separate periods. Earliest studies investigated the simple main effect of item order on test performance. The late 1960s emphasized interactions between item order and examinees' characteristics. Current concern focuses on item…
Descriptors: Achievement Tests, Aptitude Tests, Item Analysis, Latent Trait Theory