He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Pelanek, Radek – Journal of Learning Analytics, 2021
In this work, we consider learning analytics for primary and secondary schools from the perspective of the designer of a learning system. We provide an overview of practically useful analytics techniques with descriptions of their applications and specific illustrations. We highlight data biases and caveats that complicate the analysis and its…
Descriptors: Learning Analytics, Elementary Schools, Secondary Schools, Educational Technology
Krzic, Maja; Brown, Sandra – Natural Sciences Education, 2022
The transition of our large (approximately 300 student) introductory soil science course to the online setting created several challenges, including engaging first- and second-year students, providing meaningful hands-on learning activities, and setting up online exams. The objective of this paper is to describe the development and use of…
Descriptors: Introductory Courses, Social Sciences, Online Courses, Educational Change
Item Order and Speededness: Implications for Test Fairness in Higher Educational High-Stakes Testing
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Munoz, Albert; Mackay, Jonathon – Journal of University Teaching and Learning Practice, 2019
Online testing is a popular practice for tertiary educators, largely owing to efficiency in automation, scalability, and capability to add depth and breadth to subject offerings. As with all assessments, designs need to consider whether student cheating may be inadvertently made easier to commit and more difficult to detect. Cheating can jeopardise the…
Descriptors: Cheating, Test Construction, Computer Assisted Testing, Classification
Schaffhauser, Dian – T.H.E. Journal, 2012
Tony Alpert, chief operating officer for the Smarter Balanced Assessment Consortium (SBAC), ponders whether to allow tablet computers--and particularly iPads--to be used for summative testing online. As Alpert points out, not only would student cheating compromise the validity of the individual student's test event, "worse yet, it could expose…
Descriptors: Cheating, Test Validity, Test Construction, Consortia
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2012
This article extends the work by Armstrong and Shi on CUmulative SUM (CUSUM) person-fit methodology. The authors present new theoretical considerations concerning the use of CUSUM person-fit statistics based on likelihood ratios for the purpose of detecting cheating and random guessing by individual test takers. According to the Neyman-Pearson…
Descriptors: Cheating, Individual Testing, Adaptive Testing, Statistics
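The CUSUM person-fit idea referenced above can be illustrated with a minimal sketch: accumulate standardized residuals between a test taker's observed responses and model-implied success probabilities, and flag the response pattern when the running sum drifts past a threshold. This is a simplified illustration, not the likelihood-ratio formulation of Armstrong and Shi that the article extends; the threshold value and the Bernoulli residual form are assumptions for demonstration only.

```python
import math

def cusum_person_fit(responses, probs, threshold=2.0):
    """Illustrative CUSUM person-fit sketch (NOT the Armstrong & Shi
    likelihood-ratio statistic). `responses` are 0/1 item scores,
    `probs` are model-implied probabilities of a correct answer.
    Returns a per-item list of booleans marking where either the
    upper or lower cumulative sum exceeds the (assumed) threshold."""
    c_plus, c_minus = 0.0, 0.0
    flags = []
    for x, p in zip(responses, probs):
        var = p * (1 - p)
        # Standardized residual of the observed response under the model.
        z = (x - p) / math.sqrt(var) if var > 0 else 0.0
        # One-sided cumulative sums: drift up (unexpectedly many correct)
        # or down (unexpectedly many incorrect).
        c_plus = max(0.0, c_plus + z)
        c_minus = min(0.0, c_minus + z)
        flags.append(c_plus > threshold or c_minus < -threshold)
    return flags
```

A run of correct answers on items the model says are hard (e.g. p = 0.2) drives the upper sum past the threshold within a few items, which is the kind of aberrance (cheating, pre-knowledge) such statistics target; random guessing drives the lower sum down instead.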
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Young, Jeffrey R. – Chronicle of Higher Education, 2008
Several Web sites have emerged in recent years that encourage students to upload old exams to build a bank of test questions and answers that can be consulted by other students. This article reports that some professors have raised concerns about these sites, arguing that these could be used to cheat, especially if professors reuse old tests.…
Descriptors: Web Sites, Test Items, Ethics, Cheating
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational and Behavioral Statistics, 2006
A statistical test for detecting answer copying on multiple-choice items is presented. The test is based on the exact null distribution of the number of random matches between two test takers under the assumption that the response process follows a known response model. The null distribution can easily be generalized to the family of distributions…
Descriptors: Test Items, Multiple Choice Tests, Cheating, Responses
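The abstract above describes a test built on the exact null distribution of the number of random answer matches between two test takers. A minimal sketch under a deliberately simplified assumption: treat each item's match as an independent Bernoulli event with a fixed probability, so the null is binomial and the p-value is an upper tail sum. The actual van der Linden and Sotaridona test derives item-level match probabilities from a fitted response model; the constant `match_prob` here is an illustrative stand-in.

```python
from math import comb

def copying_p_value(matches, n_items, match_prob):
    """Upper-tail binomial p-value for observing at least `matches`
    identical responses out of `n_items` between two test takers,
    assuming (simplification) each item matches independently with
    probability `match_prob`. Small p-values suggest more matches
    than chance alone would produce."""
    return sum(
        comb(n_items, k) * match_prob**k * (1 - match_prob)**(n_items - k)
        for k in range(matches, n_items + 1)
    )
```

For example, 18 matches on a 20-item test with a per-item match probability of 0.3 yields a p-value far below any conventional significance level, whereas 6 matches is entirely unremarkable. Generalizing the per-item probabilities (a Poisson-binomial null) is the direction the article's response-model-based approach takes.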
Wollack, James A. – Applied Psychological Measurement, 1997
Introduces a new Item Response Theory (IRT) based statistic for detecting answer copying. Compares this omega statistic with the best classical test theory-based statistic under various conditions, and finds omega superior based on Type I error rate and power. (SLD)
Descriptors: Cheating, Identification, Item Response Theory, Power (Statistics)
Rigol, Gretchen W. – College Board Review, 1991
The College Entrance Examination Board has not permitted calculator use on the Scholastic Aptitude Test because of unresolved concerns about equity, implications for test content, and logistical and security issues. Those issues no longer seem insurmountable, and significant changes are being introduced on many tests. (MSE)
Descriptors: Calculators, Cheating, College Entrance Examinations, Higher Education