Kuan-Yu Jin; Thomas Eckes – Educational and Psychological Measurement, 2024
Insufficient effort responding (IER) refers to a lack of effort when answering survey or questionnaire items. Such items typically offer more than two ordered response categories, with Likert-type scales as the most prominent example. The underlying assumption is that the successive categories reflect increasing levels of the latent variable…
Descriptors: Item Response Theory, Test Items, Test Wiseness, Surveys
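The snippet above does not say how Jin and Eckes model IER, but a minimal sketch of one widely used screen for the behavior they describe, long-string analysis of Likert-type responses, is shown below. The cutoff of eight identical consecutive responses and the toy data are illustrative assumptions, not values from the article.

```python
# Minimal sketch of one common IER screen: long-string analysis.
# Flags respondents whose longest run of identical consecutive
# Likert responses meets a cutoff. The cutoff of 8 and the toy data
# are illustrative assumptions, not values from the article.

def longest_run(responses):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_ier(respondents, cutoff=8):
    """Indices of respondents whose longest run reaches the cutoff."""
    return [i for i, r in enumerate(respondents) if longest_run(r) >= cutoff]

if __name__ == "__main__":
    data = [
        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],  # straight-liner: likely IER
        [1, 4, 2, 5, 3, 2, 4, 1, 5, 2],  # varied responding
    ]
    print(flag_ier(data))  # -> [0]
```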
Zhang, Susu; Li, Anqi; Wang, Shiyu – Educational Measurement: Issues and Practice, 2023
In computer-based tests that allow revision and review, examinees' sequences of visits and answer changes to questions can be recorded. The variable-length revision log data introduce new complexities to the collected data but, at the same time, provide additional information on examinees' test-taking behavior, which can inform test development and…
Descriptors: Computer Assisted Testing, Test Construction, Test Wiseness, Test Items
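As an illustration of what variable-length revision log data can look like (the event layout here is an assumption for this sketch, not the authors' format), the following derives per-item visit and answer-change counts from an ordered log of visits:

```python
# Illustrative sketch (data layout assumed, not from the article): a
# variable-length revision log as ordered (item, answer) visit events,
# from which per-item visit and answer-change counts are derived.
from collections import defaultdict

def revision_summary(log):
    """log: ordered (item_id, answer) pairs, one per visit.
    Returns {item_id: (n_visits, n_answer_changes)}."""
    visits = defaultdict(int)
    changes = defaultdict(int)
    last_answer = {}
    for item, answer in log:
        visits[item] += 1
        if item in last_answer and answer != last_answer[item]:
            changes[item] += 1
        last_answer[item] = answer
    return {i: (visits[i], changes[i]) for i in visits}

if __name__ == "__main__":
    log = [(1, "A"), (2, "C"), (1, "A"), (1, "B"), (3, "D"), (2, "C")]
    print(revision_summary(log))  # {1: (3, 1), 2: (2, 0), 3: (1, 0)}
```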
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Lauritz Schewior; Marlit Annalena Lindner – Educational Psychology Review, 2024
Studies have indicated that pictures in test items can impact item-solving performance, information processing (e.g., time on task), and metacognition, as well as test-taking affect and motivation. The present review aims to better organize the existing and somewhat scattered research on multimedia effects in testing and problem solving while…
Descriptors: Multimedia Materials, Computer Assisted Testing, Test Items, Pictorial Stimuli
Carolyn Clarke – in education, 2024
This ethnographic case study, situated in Newfoundland and Labrador, Canada, examined the effects of full-scale provincial testing on families, its influences on homework, and familial accountability for teaching and learning. Data were drawn from family interviews, as well as letters and documents regarding homework. Teachers sensed a significant…
Descriptors: Academic Standards, Accountability, Testing, Homework
Scott P. Ardoin; Katherine S. Binder; Paulina A. Kulesz; Eloise Nimocks; Joshua A. Mellott – Grantee Submission, 2024
Understanding test-taking strategies (TTSs) and the variables that influence TTSs is crucial to understanding what reading comprehension tests measure. We examined how passage and student characteristics were associated with TTSs and their impact on response accuracy. Third (n = 78), fifth (n = 86), and eighth (n = 86) graders read and answered…
Descriptors: Test Wiseness, Eye Movements, Reading Comprehension, Reading Tests
Demirkaya, Onur; Bezirhan, Ummugul; Zhang, Jinming – Journal of Educational and Behavioral Statistics, 2023
Examinees with item preknowledge tend to obtain inflated test scores that undermine test score validity. With the availability of process data collected in computer-based assessments, research on detecting item preknowledge has progressed toward using both item scores and response times. Item revisit patterns of examinees can also be utilized as…
Descriptors: Test Items, Prior Learning, Knowledge Level, Reaction Time
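The snippet does not give the authors' detection statistic, but a naive screen in the same spirit, flagging unusually fast correct responses against each item's typical time, might look like the sketch below; the 25% ratio and the toy data are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' method): flag examinees who
# answer items correctly much faster than the item's typical time.
from statistics import median

def preknowledge_flags(times, correct, fast_ratio=0.25):
    """times[j][i], correct[j][i] for examinee j, item i.
    Flags (j, i) when the response is correct and faster than
    fast_ratio * the median response time for that item."""
    n_items = len(times[0])
    item_medians = [median(t[i] for t in times) for i in range(n_items)]
    flags = []
    for j, (t_row, c_row) in enumerate(zip(times, correct)):
        for i, (t, c) in enumerate(zip(t_row, c_row)):
            if c and t < fast_ratio * item_medians[i]:
                flags.append((j, i))
    return flags

if __name__ == "__main__":
    times = [[30, 45, 60], [5, 8, 12], [28, 50, 55]]  # seconds per item
    correct = [[1, 0, 1], [1, 1, 1], [0, 1, 1]]
    print(preknowledge_flags(times, correct))  # examinee 1 flagged on all items
```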
Ella Anghel; Lale Khorramdel; Matthias von Davier – Large-scale Assessments in Education, 2024
As the use of process data in large-scale educational assessments is becoming more common, it is clear that data on examinees' test-taking behaviors can illuminate their performance, and can have crucial ramifications concerning assessments' validity. A thorough review of the literature in the field may inform researchers and practitioners of…
Descriptors: Educational Assessment, Test Validity, Test Items, Reaction Time
Nedjat-Haiem, Matthew; Cooke, James E. – Cogent Education, 2021
Assessments are common in undergraduate classrooms, with formats including multiple-choice and open-ended questions (in which students must generate their own answers). While much is known about the strategies that students use when answering multiple-choice questions, there has yet to be a study evaluating the strategies that students employ…
Descriptors: Test Wiseness, Test Items, Undergraduate Students, Student Evaluation
Xiao, Yue; He, Qiwei; Veldkamp, Bernard; Liu, Hongyun – Journal of Computer Assisted Learning, 2021
The response process of problem-solving items contains rich information about respondents' behaviours and cognitive processes in digital tasks, but extracting that information is a major challenge. The aim of the study is to use a data-driven approach to explore the latent states and state transitions underlying the problem-solving process to reflect…
Descriptors: Problem Solving, Competence, Markov Processes, Test Wiseness
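A hedged sketch of a common first step toward such latent-state analyses (not the authors' model): estimating a first-order Markov transition matrix over observed problem-solving actions, which is often done before fitting hidden-state models.

```python
# Illustrative sketch: first-order Markov transition probabilities
# estimated from observed action sequences. Action labels are invented
# for the example; the article's actual coding scheme is not given.
from collections import Counter

def transition_matrix(sequences):
    """sequences: lists of action labels. Returns {(s, t): P(t | s)}."""
    pair_counts = Counter()
    state_counts = Counter()
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):
            pair_counts[(s, t)] += 1
            state_counts[s] += 1
    return {(s, t): c / state_counts[s] for (s, t), c in pair_counts.items()}

if __name__ == "__main__":
    logs = [["start", "explore", "explore", "submit"],
            ["start", "explore", "submit"]]
    for (s, t), p in sorted(transition_matrix(logs).items()):
        print(f"P({t} | {s}) = {p:.2f}")
```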
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
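One standard way to operationalize rapid guessing is a per-item response-time threshold, in the spirit of response-time effort indices; the sketch below uses 10% of each item's mean time as the cutoff, a conventional choice rather than a value taken from the article.

```python
# Sketch of a common rapid-guess screen: a response counts as a rapid
# guess if its time falls below 10% of the item's mean response time.
# The 10% figure and the toy data are illustrative assumptions.
from statistics import mean

def response_time_effort(times):
    """times[j][i]: response time of examinee j on item i.
    Returns each examinee's proportion of non-rapid responses."""
    n_items = len(times[0])
    thresholds = [0.10 * mean(t[i] for t in times) for i in range(n_items)]
    return [
        sum(t >= thr for t, thr in zip(row, thresholds)) / n_items
        for row in times
    ]

if __name__ == "__main__":
    times = [[40, 55, 38], [2, 3, 50], [35, 60, 45]]
    print(response_time_effort(times))  # low values suggest disengagement
```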
Laura Laclede – ProQuest LLC, 2023
Because non-cognitive constructs can influence student success in education beyond academic achievement, it is essential that they are reliably conceptualized and measured. Within this context, there are several gaps in the literature related to correctly interpreting the meaning of scale scores when a non-standard response option like I do not…
Descriptors: High School Students, Test Wiseness, Models, Test Items
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, computerized adaptive test (CAT) algorithms select items with maximum information, typically items with a 50% probability of being answered correctly. However, examinees may not be satisfied if they answer only 50% of the items correctly. Researchers have found that changing the item selection algorithm to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
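Under a two-parameter logistic (2PL) model, item information at ability θ is a²P(θ)(1 − P(θ)), which peaks exactly where the probability of a correct answer is 0.5, the 50% figure the abstract mentions. The sketch below contrasts classic maximum-information selection with an "easier item" rule; the item parameters and the 70% success target are illustrative assumptions, not values from the article.

```python
# Sketch of 2PL item information and two selection rules. Item
# parameters (a, b) and the 70% success target are illustrative
# assumptions, not values from the article.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """2PL Fisher information a^2 * P * (1 - P); maximal when P = 0.5."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_max_info(theta, items):
    """Classic CAT rule: pick the most informative item at theta."""
    return max(items, key=lambda ab: information(theta, *ab))

def select_easier(theta, items, target=0.70):
    """Modified rule: pick the item whose success probability is
    closest to a friendlier target, e.g., 70%."""
    return min(items, key=lambda ab: abs(p_correct(theta, *ab) - target))

if __name__ == "__main__":
    items = [(1.2, -1.0), (1.0, 0.0), (1.4, 0.5), (0.9, 1.5)]  # (a, b)
    theta = 0.0
    print(select_max_info(theta, items))  # -> (1.4, 0.5): highest information
    print(select_easier(theta, items))    # -> (1.2, -1.0): likely success
```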
Livingston, Samuel A. – Educational Testing Service, 2020
This booklet is a conceptual introduction to item response theory (IRT), which many large-scale testing programs use for constructing and scoring their tests. Although IRT is essentially mathematical, the approach here is nonmathematical, so as to serve as an introduction to the topic for people who want to understand why IRT is used and what…
Descriptors: Item Response Theory, Scoring, Test Items, Scaling