Kuan-Yu Jin; Thomas Eckes – Educational and Psychological Measurement, 2024
Insufficient effort responding (IER) refers to a lack of effort when answering survey or questionnaire items. Such items typically offer more than two ordered response categories, with Likert-type scales as the most prominent example. The underlying assumption is that the successive categories reflect increasing levels of the latent variable…
Descriptors: Item Response Theory, Test Items, Test Wiseness, Surveys
Zhang, Susu; Li, Anqi; Wang, Shiyu – Educational Measurement: Issues and Practice, 2023
In computer-based tests allowing revision and reviews, examinees' sequence of visits and answer changes to questions can be recorded. The variable-length revision log data introduce new complexities to the collected data but, at the same time, provide additional information on examinees' test-taking behavior, which can inform test development and…
Descriptors: Computer Assisted Testing, Test Construction, Test Wiseness, Test Items
Carolyn Clarke – in education, 2024
This ethnographic case study, situated in Newfoundland and Labrador, Canada, examined the effects of full-scale provincial testing on families, its influences on homework, and familial accountability for teaching and learning. Data were drawn from family interviews, as well as letters and documents regarding homework. Teachers sensed a significant…
Descriptors: Academic Standards, Accountability, Testing, Homework
Scott P. Ardoin; Katherine S. Binder; Paulina A. Kulesz; Eloise Nimocks; Joshua A. Mellott – Grantee Submission, 2024
Understanding test-taking strategies (TTSs) and the variables that influence TTSs is crucial to understanding what reading comprehension tests measure. We examined how passage and student characteristics were associated with TTSs and their impact on response accuracy. Third (n = 78), fifth (n = 86), and eighth (n = 86) graders read and answered…
Descriptors: Test Wiseness, Eye Movements, Reading Comprehension, Reading Tests
Demirkaya, Onur; Bezirhan, Ummugul; Zhang, Jinming – Journal of Educational and Behavioral Statistics, 2023
Examinees with item preknowledge tend to obtain inflated test scores that undermine test score validity. With the availability of process data collected in computer-based assessments, the research on detecting item preknowledge has progressed on using both item scores and response times. Item revisit patterns of examinees can also be utilized as…
Descriptors: Test Items, Prior Learning, Knowledge Level, Reaction Time
Nedjat-Haiem, Matthew; Cooke, James E. – Cogent Education, 2021
Assessments are common in undergraduate classrooms, with formats including multiple-choice and open-ended (in which the students must generate their own answers) questions. While much is known about the strategies that students use when taking multiple-choice questions, there has yet to be a study evaluating the strategies that students employ…
Descriptors: Test Wiseness, Test Items, Undergraduate Students, Student Evaluation
Xiao, Yue; He, Qiwei; Veldkamp, Bernard; Liu, Hongyun – Journal of Computer Assisted Learning, 2021
The response process of problem-solving items contains rich information about respondents' behaviours and cognitive process in the digital tasks, while the information extraction is a big challenge. The aim of the study is to use a data-driven approach to explore the latent states and state transitions underlying problem-solving process to reflect…
Descriptors: Problem Solving, Competence, Markov Processes, Test Wiseness
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically with a 50% probability of being answered correctly. However, examinees may not be satisfied if they only correctly answer 50% of the items. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers immediately review the answers, and ask to redo those that are incoherent. This can be a difficult task and can be time-consuming for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Kho, Shermaine Qi En; Aryadoust, Vahid; Foo, Stacy – Education and Information Technologies, 2023
Studies have shown that test-takers tend to use keyword-matching strategies when taking listening tests. Keyword-matching involves matching content words in the written modality (test items) against those heard in the audio text. However, no research has investigated the effect of such keywords in listening tests, or the impact of gazing upon…
Descriptors: Eye Movements, Test Wiseness, Information Retrieval, Listening Comprehension Tests
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (i.e., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of SR item format are more complex than the traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2023
A conceptualization of multiple-choice exams in terms of signal detection theory (SDT) leads to simple measures of item difficulty and item discrimination that are closely related to, but also distinct from, those used in classical item analysis (CIA). The theory defines a "true split," depending on whether or not examinees know an item,…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Test Wiseness
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers may have serious consequences, as non-effortful responses can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals, and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. Whereas this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items