Showing 1 to 15 of 23 results
Peer reviewed
Direct link
Yavuz Akbulut – European Journal of Education, 2024
The testing effect refers to the gains in learning and retention that result from taking practice tests before the final test. Understanding the conditions under which practice tests improve learning is crucial, so four experiments were conducted with a total of 438 undergraduate students in Turkey. In the first study, students who took graded…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Testing
Peer reviewed
PDF on ERIC (full text available)
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of the SR item format are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Direct link
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Peer reviewed
Direct link
Sherman, Tyler J.; Harvey, Tanner M.; Royse, Emily A.; Heim, Ashley B.; Smith, Cara F.; Romano, Alicia B.; King, Aspen E.; Lyons, David O.; Holt, Emily A. – Journal of Biological Education, 2021
Formative assessments have been shown to improve student success; however, the format in which these assessments are implemented has not been well researched. In this study, individual Anatomy and Physiology lab sections were administered formative assessments composed of either a projected (i.e. 'shared-display') quiz presentation or a…
Descriptors: Test Format, Academic Achievement, Decision Making, Student Behavior
Peer reviewed
Direct link
Casandra Koevoets-Beach; Karen Julian; Morgan Balabanoff – Chemistry Education Research and Practice, 2023
Two-tiered assessment structures with paired content and confidence items are frequently used within chemistry assessments to stimulate and measure students' metacognition. The confidence judgment is designed to promote students' reflection on their application of content knowledge and can be characterized as calibrated or miscalibrated based on…
Descriptors: Chemistry, Self Concept, Student Attitudes, Mastery Learning
Peer reviewed
Direct link
Sung, Rou-Jia; Swarat, Su L.; Lo, Stanley M. – Journal of Biological Education, 2022
Exams constitute the predominant form of summative assessment in undergraduate biology education, with the assumption that exam performance should reflect student conceptual understanding. Previous work highlights multiple examples in which students can answer exam problems correctly without the corresponding conceptual understanding. This…
Descriptors: Biology, Problem Solving, Undergraduate Students, Scientific Concepts
Peer reviewed
Direct link
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
Peer reviewed
Direct link
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Assessment, 2020
We investigated how item formats influence test takers' response tendencies under uncertainty. Adult participants solved content-equivalent math items in three formats: multiple-selection multiple-choice, grid with forced-choice (true-false) options, and grid with non-forced-choice options. Participants showed a greater tendency to commit (rather…
Descriptors: College Students, Test Wiseness, Test Format, Test Items
Peer reviewed
Direct link
Nsor-Ambala, Randolph – Accounting Education, 2020
The study explored the application of closed-book, open-book and cheat-sheet exams in an undergraduate cost and management accounting course at a university in Ghana. 198 students participated in an exploratory study examining how the different exam types affect exam scores, pre-exam anxiety and knowledge retention. The study improves on the…
Descriptors: Accounting, Teaching Methods, Undergraduate Students, Management Development
Peer reviewed
PDF on ERIC (full text available)
Joseph, Dane Christian – Journal of Effective Teaching in Higher Education, 2019
Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of…
Descriptors: Test Construction, Multiple Choice Tests, Guessing (Tests), Test Wiseness
Peer reviewed
Direct link
Brassil, Chad E.; Couch, Brian A. – International Journal of STEM Education, 2019
Background: Within undergraduate science courses, instructors often assess student thinking using closed-ended question formats, such as multiple-choice (MC) and multiple-true-false (MTF), where students provide answers with respect to predetermined response options. While MC and MTF questions both consist of a question stem followed by a series…
Descriptors: Multiple Choice Tests, Objective Tests, Student Evaluation, Thinking Skills
Peer reviewed
Direct link
Kiss, Hubert János; Selei, Adrienn – Education Economics, 2018
Success in life is determined to a large extent by school performance, which in turn depends heavily on grades obtained in exams. In this study, we investigate a particular type of exam: multiple-choice tests. More concretely, we study whether patterns of correct answers in multiple-choice tests affect performance. We design an experiment to study if…
Descriptors: Multiple Choice Tests, Control Groups, Experimental Groups, Test Format
Peer reviewed
PDF on ERIC (full text available)
Fukuzawa, Sherry; deBraga, Michael – Journal of Curriculum and Teaching, 2019
Graded Response Method (GRM) is an alternative to multiple-choice testing where students rank options according to their relevance to the question. GRM requires discrimination and inference between statements and is a cost-effective critical thinking assessment in large courses where open-ended answers are not feasible. This study examined…
Descriptors: Alternative Assessment, Multiple Choice Tests, Test Items, Test Format
Peer reviewed
Direct link
Kiat, John Emmanuel; Ong, Ai Rene; Ganesan, Asha – Educational Psychology, 2018
Multiple-choice questions (MCQs) play a key role in standardised testing and in-class assessment. Research into the influence of within-item response order on MCQ characteristics has been mixed. While some researchers have shown preferential selection of response options presented earlier in the answer list, others have failed to replicate these…
Descriptors: Undergraduate Students, Multiple Choice Tests, Attention Control, Item Response Theory
Peer reviewed
PDF on ERIC (full text available)
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests