Publication Date
In 2025 (0)
Since 2024 (1)
Since 2021, last 5 years (9)
Since 2016, last 10 years (28)
Since 2006, last 20 years (40)
Descriptor
Cheating (45)
Item Response Theory (45)
Test Items (18)
Identification (15)
Scores (12)
Statistical Analysis (11)
Item Analysis (9)
Multiple Choice Tests (9)
Foreign Countries (8)
High Stakes Tests (8)
Testing (8)
Publication Type
Journal Articles (33)
Reports - Research (28)
Reports - Evaluative (8)
Reports - Descriptive (5)
Dissertations/Theses -… (3)
Numerical/Quantitative Data (3)
Speeches/Meeting Papers (2)
Collected Works - Proceedings (1)
Education Level
Higher Education (9)
Elementary Education (5)
Postsecondary Education (5)
Grade 4 (4)
Intermediate Grades (4)
Grade 8 (3)
High Schools (3)
Junior High Schools (3)
Middle Schools (3)
Secondary Education (3)
Early Childhood Education (2)
Location
Netherlands (3)
Brazil (1)
Canada (1)
Nigeria (1)
Turkey (1)
Assessments and Surveys
National Assessment of… (1)
Progress in International… (1)
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
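The abstract above describes a randomization p-value test built on a match-score statistic. As a minimal illustrative sketch (not Lang's exact RP test; the function names and the permutation scheme are assumptions), one way to attach a Monte Carlo randomization p-value to an observed match score between two answer strings is:

```python
import random

def match_score(a, b):
    """Number of items on which two response vectors agree."""
    return sum(x == y for x, y in zip(a, b))

def randomization_p_value(source, copier, n_perm=10000, seed=0):
    """Monte Carlo randomization p-value for a match-score statistic:
    repeatedly permute the suspected copier's responses across items
    and record how often a match score at least as large as the
    observed one arises by chance alone."""
    rng = random.Random(seed)
    observed = match_score(source, copier)
    shuffled = list(copier)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if match_score(source, shuffled) >= observed:
            extreme += 1
    # Add-one correction keeps the p-value strictly positive.
    return (extreme + 1) / (n_perm + 1)
```

A perfectly copied answer string yields a p-value near 1/(n_perm + 1), while unrelated answer strings yield a large p-value; the published test differs in how it conditions on the examinees' own response distributions.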
Kaiwen Man – Educational and Psychological Measurement, 2024
In various fields, including college admission, medical board certifications, and military recruitment, high-stakes decisions are frequently made based on scores obtained from large-scale assessments. These decisions necessitate precise and reliable scores that enable valid inferences to be drawn about test-takers. However, the ability of such…
Descriptors: Prior Learning, Testing, Behavior, Artificial Intelligence
Liu, Jinghua; Becker, Kirk – Journal of Educational Measurement, 2022
For any testing programs that administer multiple forms across multiple years, maintaining score comparability via equating is essential. With continuous testing and high-stakes results, especially with less secure online administrations, testing programs must consider the potential for cheating on their exams. This study used empirical and…
Descriptors: Cheating, Item Response Theory, Scores, High Stakes Tests
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Ross, Linette P. – ProQuest LLC, 2022
One of the most serious forms of cheating occurs when examinees have item preknowledge and prior access to secure test material before taking an exam for the purpose of obtaining an inflated test score. Examinees who cheat and have prior knowledge of test content before testing may have an unfair advantage over examinees who do not cheat. Item…
Descriptors: Testing, Deception, Cheating, Identification
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Many approaches have been proposed to jointly analyze item responses and response times to understand behavioral differences between normally and aberrantly behaved test-takers. Biometric information, such as data from eye trackers, can be used to better identify these deviant testing behaviors in addition to more conventional data types. Given…
Descriptors: Cheating, Item Response Theory, Reaction Time, Eye Movements
Zopluoglu, Cengiz – International Journal of Assessment Tools in Education, 2019
Unusual response similarity among test takers may occur in testing data and be an indicator of potential test fraud (e.g., examinees copying responses from other examinees, exchanging text messages or pre-arranged signals for the correct response, or item pre-knowledge). One index to measure the degree of similarity between two response…
Descriptors: Item Response Theory, Computation, Cheating, Measurement Techniques
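Response-similarity indices of the kind this abstract refers to typically weight matching *incorrect* answers more heavily than matching correct ones, since two able examinees can agree on correct answers by chance. A hedged sketch of that decomposition (the function name is hypothetical, not the index studied in the paper):

```python
def similarity_counts(resp_a, resp_b, key):
    """Split the agreement between two examinees' answer strings into
    matching correct answers and matching incorrect answers, given the
    scoring key. Matching errors carry more evidential weight for
    copying than matching correct answers."""
    same_correct = sum(a == b == k for a, b, k in zip(resp_a, resp_b, key))
    same_incorrect = sum(a == b and a != k
                         for a, b, k in zip(resp_a, resp_b, key))
    return same_correct, same_incorrect
```

Published indices then compare these counts against a null model of independent responding, often using IRT-based probabilities for each examinee-item pair.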
Mor, Ezgi; Kula-Kartal, Seval – International Journal of Assessment Tools in Education, 2022
Dimensionality is one of the most investigated concepts in psychological assessment, and there are many ways to determine the dimensionality of a measured construct. The Automated Item Selection Procedure (AISP) and DETECT are non-parametric methods aiming to determine the factorial structure of a data set. In the current study,…
Descriptors: Psychological Evaluation, Nonparametric Statistics, Test Items, Item Analysis
Man, Kaiwen; Harring, Jeffrey R.; Sinharay, Sandip – Journal of Educational Measurement, 2019
Data mining methods have drawn considerable attention across diverse scientific fields. However, few applications could be found in the areas of psychological and educational measurement, and particularly pertinent to this article, in test security research. In this study, various data mining methods for detecting cheating behaviors on large-scale…
Descriptors: Information Retrieval, Data Analysis, Identification, Tests
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2019
Researchers frequently use machine-learning methods in many fields. In the area of testing, however, relatively few studies have used these methods to identify potential fraud. In this study, a technical review of a recently developed state-of-the-art algorithm, Extreme Gradient Boosting (XGBoost), is…
Descriptors: Identification, Test Items, Deception, Cheating
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Dimitrov, Dimiter M.; Atanasov, Dimitar V.; Luo, Yong – Measurement: Interdisciplinary Research and Perspectives, 2020
This study examines and compares four person-fit statistics (PFSs) in the framework of the "D"-scoring method (DSM): (a) van der Flier's "U3" statistic; (b) the "Ud" statistic, a modification of "U3" under the DSM; (c) the "Zd" statistic, a modification of the "Z3 (l_z)"…
Descriptors: Goodness of Fit, Item Analysis, Item Response Theory, Scoring
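Person-fit statistics such as U3 are built on the idea of Guttman errors: an examinee who misses easy items while answering hard items correctly is responding aberrantly. As an illustrative sketch of that core count (U3 itself uses a weighted, normalized version; this simplified function is an assumption, not the statistic from the paper):

```python
def guttman_errors(scored):
    """Count Guttman errors in a 0/1 scored response vector whose items
    are ordered from easiest to hardest: each (i, j) pair with i easier
    than j where the easy item is missed (0) but the harder item is
    answered correctly (1). Larger counts signal aberrant response
    patterns, e.g., preknowledge of a few hard items."""
    errors = 0
    for i in range(len(scored)):
        for j in range(i + 1, len(scored)):
            if scored[i] == 0 and scored[j] == 1:
                errors += 1
    return errors
```

A perfect Guttman pattern (all correct answers on the easiest items, all misses on the hardest) produces zero errors; the reversed pattern maximizes the count.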
Sinharay, Sandip; Jensen, Jens Ledet – Grantee Submission, 2018
In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge etc. Traditional…
Descriptors: Test Items, Ability, Mathematics, Item Response Theory
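The problem this abstract poses, testing whether an examinee's ability is the same over two item sets, can be illustrated with a classical two-proportion z statistic (a deliberately simple stand-in, not the traditional or proposed statistics the paper evaluates; the function name is hypothetical):

```python
import math

def two_subset_z(x_a, n_a, x_b, n_b):
    """Two-proportion z statistic comparing an examinee's number of
    correct answers on two item sets (x_a correct of n_a vs. x_b of
    n_b). A large |z| suggests performance is not consistent across
    the sets -- e.g., preknowledge of one set's items, or score change
    between administrations."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)          # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

This treats items within each set as exchangeable; the IRT-based approaches cited in the abstract instead condition on each item's difficulty and discrimination.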
Wang, Chun; Xu, Gongjun; Shang, Zhuoran; Kuncel, Nathan – Journal of Educational and Behavioral Statistics, 2018
Modern web-based technology has greatly popularized computer-administered testing, also known as online testing. When these online tests are administered continuously within a certain "testing window," many items are likely to be exposed and compromised, posing a type of test security concern. In addition, if the testing time is limited,…
Descriptors: Computer Assisted Testing, Cheating, Guessing (Tests), Item Response Theory
Sinharay, Sandip; Johnson, Matthew S. – Grantee Submission, 2019
According to Wollack and Schoenig (2018), benefitting from item preknowledge is one of the three broad types of test fraud that occur in educational assessments. We use tools from constrained statistical inference to suggest a new statistic that is based on item scores and response times and can be used to detect the examinees who may have…
Descriptors: Scores, Test Items, Reaction Time, Cheating
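The statistic above combines item scores and response times; one intuition behind such detectors is that an examinee with item preknowledge answers a compromised item both correctly and unusually fast relative to their own pace. A minimal within-person screen along those lines (an illustrative assumption, not the constrained-inference statistic the paper proposes):

```python
import math
import statistics

def fast_correct_flags(times, correct, z_cut=-2.0):
    """Flag items answered correctly in unusually short time relative
    to the examinee's own log response-time distribution. `times` are
    per-item response times in seconds; `correct` is 0/1 scoring."""
    logs = [math.log(t) for t in times]
    mu = statistics.mean(logs)
    sd = statistics.pstdev(logs)
    flags = []
    for lt, c in zip(logs, correct):
        z = (lt - mu) / sd if sd > 0 else 0.0
        flags.append(c == 1 and z < z_cut)
    return flags
```

Log times are used because response-time distributions are strongly right-skewed; operational detectors additionally model expected time per item across examinees rather than within one examinee's record.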