Showing 1 to 15 of 74 results
Peer reviewed
Hong, Maxwell; Rebouças, Daniella A.; Cheng, Ying – Journal of Educational Measurement, 2021
Response time has come to play an increasingly important role in educational and psychological testing, prompting many response time models to be proposed in recent years. However, response time modeling can be adversely affected by aberrant response behavior. For example, test speededness can cause response times on certain items to deviate…
Descriptors: Reaction Time, Models, Computation, Robustness (Statistics)
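Many of the response time models alluded to in the entry above build on the lognormal model of van der Linden; the snippet does not say which model the authors use, so the following is background only:

    \ln T_{ij} = \beta_j - \tau_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \alpha_j^{-2})

Here T_{ij} is the response time of person i on item j, \tau_i the person's speed, \beta_j the item's time intensity, and \alpha_j a precision parameter; speededness manifests as log-times falling far below the predicted value \beta_j - \tau_i.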
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
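The snippet does not give the two test statistics themselves; the sketch below only illustrates the generic idea of an RT-based check under a lognormal response time model, flagging answers given implausibly fast for a possibly compromised item. All parameter names and the cutoff are placeholders, not the article's statistics.

    def rt_flag(log_rt, beta_j, tau_i, alpha_j, z_crit=-2.33):
        """Flag a suspiciously fast response under a lognormal RT model.

        log_rt  : observed log response time of person i on item j
        beta_j  : item time intensity; tau_i: person speed
        alpha_j : item precision (inverse SD of the log response time)
        Returns True when the standardized residual falls below z_crit,
        i.e. the response was made much faster than the model expects.
        """
        z = alpha_j * (log_rt - (beta_j - tau_i))  # standardized log-RT residual
        return z < z_crit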
Peer reviewed
Doval, Eduardo; Delicado, Pedro – Journal of Educational and Behavioral Statistics, 2020
We propose new methods for identifying and classifying aberrant response patterns (ARPs) by means of functional data analysis. These methods take the person response function (PRF) of an individual and compare it with the pattern that would correspond to a generic individual of the same ability according to the item-person response surface. ARPs…
Descriptors: Response Style (Tests), Data Analysis, Identification, Classification
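As background on the person response function (PRF): where an item characteristic curve plots the probability of a correct answer against ability for a fixed item, the PRF plots it against item difficulty for a fixed person. Under a Rasch model, assumed here purely for illustration, the PRF of a person with ability \theta_i is

    P_i(\beta) = \frac{\exp(\theta_i - \beta)}{1 + \exp(\theta_i - \beta)},

a curve that should decrease monotonically in the difficulty \beta; aberrant response patterns appear as systematic departures from it.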
Peer reviewed
Iannario, Maria; Manisera, Marica; Piccolo, Domenico; Zuccolotto, Paola – Sociological Methods & Research, 2020
In analyzing data from attitude surveys, it is common to consider the "don't know" responses as missing values. In this article, we present a statistical model commonly used for the analysis of responses/evaluations expressed on Likert scales and extended to take into account the presence of don't know responses. The main objective is to…
Descriptors: Response Style (Tests), Likert Scales, Statistical Analysis, Models
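The Likert-scale model being extended is presumably from the CUB family associated with these authors; in one common parameterization, the baseline CUB model mixes a shifted binomial "feeling" component with a uniform "uncertainty" component,

    \Pr(R = r) = \pi \binom{m-1}{r-1} (1 - \xi)^{r-1} \xi^{m-r} + (1 - \pi)\frac{1}{m}, \qquad r = 1, \dots, m,

and the don't-know extension described in the abstract adds treatment of that category on top (its exact specification is not shown in this snippet).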
Peer reviewed
Hill, Laura G. – International Journal of Behavioral Development, 2020
Retrospective pretests ask respondents to report after an intervention on their aptitudes, knowledge, or beliefs before the intervention. A primary reason to administer a retrospective pretest is that in some situations, program participants may over the course of an intervention revise or recalibrate their prior understanding of program content,…
Descriptors: Pretesting, Response Style (Tests), Bias, Testing Problems
OECD Publishing, 2019
Computer-based administration of large-scale assessments makes it possible to collect a rich set of information on test takers, through analysis of the log files recording interactions between the computer interface and the server. This report examines timing and engagement indicators from the Survey of Adult Skills, a product of the Programme for…
Descriptors: Adults, Surveys, International Assessment, Responses
OECD Publishing, 2019
Log files from computer-based assessment can help better understand respondents' behaviours and cognitive strategies. Analysis of timing information from Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
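To make the timing indicators concrete, the sketch below derives time-on-item from hypothetical log events and flags rapid, likely disengaged responses. The event format, action names, and the 5-second cutoff are assumptions for illustration, not the PIAAC log-file schema or an official engagement rule.

    from collections import defaultdict

    def item_times(events):
        """Sum time-on-item from (timestamp_sec, item_id, action) events,
        assuming each visit to an item logs an 'enter' and a 'leave'."""
        enter, times = {}, defaultdict(float)
        for ts, item, action in events:
            if action == "enter":
                enter[item] = ts
            elif action == "leave" and item in enter:
                times[item] += ts - enter.pop(item)
        return dict(times)

    def disengaged(times, threshold_sec=5.0):
        """Return the items answered faster than a rapid-response threshold."""
        return {item for item, t in times.items() if t < threshold_sec}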
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
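One well-known way to handle noneffortful responses in a psychometric model, named here only as an example and not necessarily the approach of this article, is effort-moderated IRT, in which an indicator S_{ij} of solution behavior (often derived from response time) switches between the regular item response function and a chance process:

    P(X_{ij} = 1 \mid \theta_i) = S_{ij}\, P_j(\theta_i) + (1 - S_{ij})\, g_j,

where g_j is the chance-level probability for item j (for example, 1/k on a k-option multiple-choice item).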
Peer reviewed
Halpin, Peter F. – Measurement: Interdisciplinary Research and Perspectives, 2017
The target paper, "Rethinking Traditional Methods of Survey Validation" (Andrew Maul), raises some interesting critical ideas, both old and new, about the validation of self-report surveys. As indicated by Dr. Maul, recent policy initiatives in the United States (e.g., ESSA) have led to a demand for assessments of…
Descriptors: Self Evaluation (Individuals), Evaluation Methods, Measurement Techniques, Response Style (Tests)
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
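For readers new to IRTree models, a common decomposition of a 5-point Likert response (assumed here for illustration; the article's exact tree is not shown in the snippet) uses three binary pseudo-items: midpoint vs. not, direction, and extreme vs. moderate. The extremity node is where an extreme response style typically shows up.

    def irtree_pseudoitems(response):
        """Map a 5-point Likert response (1-5) onto the three binary
        pseudo-items of a common IRTree decomposition; None marks nodes
        that are not reached for that response."""
        midpoint = 1 if response == 3 else 0
        direction = None if response == 3 else (1 if response > 3 else 0)
        extreme = None if response == 3 else (1 if response in (1, 5) else 0)
        return {"midpoint": midpoint, "direction": direction, "extreme": extreme}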
Keslair, François – OECD Publishing, 2018
This paper explores the impact of test-taking conditions on the quality of the Programme for the International Assessment of Adult Competencies (PIAAC) assessment. Interviewers record information about the room of assessment and interruptions that occurred during each interview. These observations, along with information on interviewer assignment…
Descriptors: Interviews, Testing, Educational Quality, Foreign Countries
Peer reviewed
Zhang, Dongbo; Koda, Keiko – Asian-Pacific Journal of Second and Foreign Language Education, 2017
Word Associates Format (WAF) tests are often used to measure second language learners' vocabulary depth, with a focus on their network knowledge. Yet the specific forms of the tests and the ways they were used have often varied, which tended to affect learners' response behaviors and, more importantly, the psychometric…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Construction
Peer reviewed
Vispoel, Walter P.; Tao, Shuqin – Psychological Assessment, 2013
Our goal in this investigation was to evaluate the reliability of scores from the Balanced Inventory of Desirable Responding (BIDR) more comprehensively than in prior research using a generalizability-theory framework based on both dichotomous and polytomous scoring of items. Generalizability coefficients accounting for specific-factor, transient,…
Descriptors: Reliability, Scores, Measures (Individuals), Generalizability Theory
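The specific-factor, transient, and random-response error sources mentioned here map naturally onto the person x item, person x occasion, and residual facets of a persons x items x occasions G-study; under that standard design (stated here as background, not as this article's exact design), a generalizability coefficient for relative decisions is

    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi}/n_i + \sigma^2_{po}/n_o + \sigma^2_{pio,e}/(n_i n_o)},

with n_i items and n_o occasions.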
Peer reviewed
Tremblay, Pascale; Sato, Marc; Small, Steven L. – Neuropsychologia, 2012
Despite accumulating evidence that cortical motor areas, particularly the lateral premotor cortex, are activated during language comprehension, the question of whether motor processes help mediate the semantic encoding of language remains controversial. To address this issue, we examined whether low frequency (1 Hz) repetitive transcranial…
Descriptors: Priming, Evidence, Comprehension, Sentences
Peer reviewed
Spooren, Pieter; Mortelmans, Dimitri; Thijssen, Peter – British Educational Research Journal, 2012
Structural equation modelling is used to test for the existence of a response style (in particular, acquiescence) behind three balanced Likert scales measuring different concepts in a questionnaire for student evaluation of teaching in higher education. Exploration with one sample (n = 1125) and confirmation in a second sample (n = 710) from a…
Descriptors: College Students, Student Evaluation of Teacher Performance, Response Style (Tests), Likert Scales
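A common way to represent acquiescence in a structural equation model, assumed here for illustration rather than as the article's exact specification, is a method factor loading with the same sign on every item of a balanced scale, while the content loadings keep the signs implied by item keying:

    y_{pj} = \lambda_j \eta_p + a_p + \varepsilon_{pj},

where \eta_p is the content factor, \lambda_j carries the keying sign, and a_p is the person's acquiescence level with its loading fixed to 1 on all items. Balance is what makes the two factors separable: content-driven agreement changes sign across positively and negatively keyed items, whereas acquiescence raises responses to all of them.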