Showing all 12 results
Peer reviewed
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Peer reviewed
Pelanek, Radek – Journal of Learning Analytics, 2021
In this work, we consider learning analytics for primary and secondary schools from the perspective of the designer of a learning system. We provide an overview of practically useful analytics techniques with descriptions of their applications and specific illustrations. We highlight data biases and caveats that complicate the analysis and its…
Descriptors: Learning Analytics, Elementary Schools, Secondary Schools, Educational Technology
Peer reviewed
Krzic, Maja; Brown, Sandra – Natural Sciences Education, 2022
The transition of our large (approximately 300-student) introductory soil science course to the online setting created several challenges, including engaging first- and second-year students, providing meaningful hands-on learning activities, and setting up online exams. The objective of this paper is to describe the development and use of…
Descriptors: Introductory Courses, Social Sciences, Online Courses, Educational Change
Peer reviewed
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Peer reviewed
Munoz, Albert; Mackay, Jonathon – Journal of University Teaching and Learning Practice, 2019
Online testing is a popular practice for tertiary educators, largely owing to efficiency in automation, scalability, and capability to add depth and breadth to subject offerings. As with all assessments, designs need to consider whether student cheating may be inadvertently made easier and more difficult to detect. Cheating can jeopardise the…
Descriptors: Cheating, Test Construction, Computer Assisted Testing, Classification
Peer reviewed
Schaffhauser, Dian – T.H.E. Journal, 2012
Tony Alpert, chief operating officer for the Smarter Balanced Assessment Consortium (SBAC), ponders whether to allow tablet computers--and particularly iPads--to be used for summative testing online. As Alpert points out, not only would student cheating compromise the validity of the individual student's test event, "worse yet, it could expose…
Descriptors: Cheating, Test Validity, Test Construction, Consortia
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2012
This article extends the work by Armstrong and Shi on CUmulative SUM (CUSUM) person-fit methodology. The authors present new theoretical considerations concerning the use of CUSUM person-fit statistics based on likelihood ratios for the purpose of detecting cheating and random guessing by individual test takers. According to the Neyman-Pearson…
Descriptors: Cheating, Individual Testing, Adaptive Testing, Statistics
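The CUSUM person-fit approach the Tendeiro and Meijer abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are invented for the example, the null uses given model-implied success probabilities (e.g., from a Rasch model), and the alternative simply inflates each probability by a fixed `delta` to mimic aberrant (e.g., copied) responding.

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cusum_cheating_chart(responses, probs, delta=0.2, h=3.0):
    """Upper CUSUM of item-level log-likelihood ratios.

    Null: P(correct on item i) = probs[i] (model-implied).
    Alternative: that probability inflated by delta (capped at 0.99).
    Returns the CUSUM path and the first item index where it crosses
    the decision threshold h (or None if it never does).
    """
    s, path, signal = 0.0, [], None
    for i, (x, p0) in enumerate(zip(responses, probs)):
        p1 = min(p0 + delta, 0.99)
        # Log-likelihood ratio of alternative vs. null for a Bernoulli response.
        llr = math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        s = max(0.0, s + llr)  # upper CUSUM resets at zero
        path.append(s)
        if signal is None and s > h:
            signal = i
    return path, signal
```

For instance, a run of correct answers on items the model says are hard (low `probs`) drives the chart upward until it signals, while responses consistent with the null keep it near zero; in practice `probs` would come from calibrated item parameters via something like `rasch_prob`.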
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Young, Jeffrey R. – Chronicle of Higher Education, 2008
Several Web sites have emerged in recent years that encourage students to upload old exams to build a bank of test questions and answers that can be consulted by other students. This article reports that some professors have raised concerns about these sites, arguing that these could be used to cheat, especially if professors reuse old tests.…
Descriptors: Web Sites, Test Items, Ethics, Cheating
Peer reviewed
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational and Behavioral Statistics, 2006
A statistical test for detecting answer copying on multiple-choice items is presented. The test is based on the exact null distribution of the number of random matches between two test takers under the assumption that the response process follows a known response model. The null distribution can easily be generalized to the family of distributions…
Descriptors: Test Items, Multiple Choice Tests, Cheating, Responses
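The exact null distribution the van der Linden and Sotaridona abstract refers to is, for independent items, a generalized (Poisson) binomial: each item matches between the two test takers with its own model-implied probability. A hedged sketch, assuming those per-item match probabilities are already available (in the article they follow from a known response model; the function names here are mine):

```python
def poisson_binomial_pmf(match_probs):
    """Exact distribution of the number of matching responses across items,
    where item i matches independently with probability match_probs[i].
    Computed by the standard dynamic-programming convolution."""
    pmf = [1.0]  # P(0 matches over 0 items) = 1
    for p in match_probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1 - p)      # item does not match
            nxt[k + 1] += mass * p        # item matches
        pmf = nxt
    return pmf

def match_pvalue(observed_matches, match_probs):
    """Upper-tail p-value: probability of at least the observed number of
    matches under the independence (no-copying) null."""
    pmf = poisson_binomial_pmf(match_probs)
    return sum(pmf[observed_matches:])
```

With equal match probabilities this reduces to an ordinary binomial, which is a convenient sanity check on the recursion.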
Peer reviewed
Wollack, James A. – Applied Psychological Measurement, 1997
Introduces a new Item Response Theory (IRT) based statistic for detecting answer copying. Compares this omega statistic with the best classical test theory-based statistic under various conditions, and finds omega superior based on Type I error rate and power. (SLD)
Descriptors: Cheating, Identification, Item Response Theory, Power (Statistics)
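A statistic in the spirit of Wollack's omega can be sketched in a few lines: the observed number of copier-source matches is centered and scaled by its model-implied mean and standard deviation and referred to a standard normal distribution. This is an illustrative assumption-laden sketch, not the published formula in full: here the per-item probabilities that the "copier" would give the source's response are taken as given, whereas in the paper they come from the nominal response IRT model.

```python
import math

def omega_statistic(num_matches, match_probs):
    """Standardized answer-copying index: (observed matches - expected) / SD,
    where match_probs[i] is the model-implied probability that the suspected
    copier would give the source's response to item i."""
    mu = sum(match_probs)
    var = sum(p * (1 - p) for p in match_probs)
    return (num_matches - mu) / math.sqrt(var)
```

A large positive value (e.g., beyond a normal critical value such as 3) flags more matches than the response model can plausibly explain.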
Rigol, Gretchen W. – College Board Review, 1991
The College Entrance Examination Board has not permitted calculator use on the Scholastic Aptitude Test because of unresolved concerns about equity, implications for test content, and logistical and security issues. Those issues no longer seem insurmountable, and significant changes are being introduced on many tests. (MSE)
Descriptors: Calculators, Cheating, College Entrance Examinations, Higher Education