Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 6 |
Since 2016 (last 10 years) | 9 |
Since 2006 (last 20 years) | 13 |
Descriptor
Response Style (Tests) | 21 |
Scores | 21 |
Test Items | 21 |
Difficulty Level | 7 |
Higher Education | 7 |
Achievement Tests | 6 |
Multiple Choice Tests | 6 |
Test Reliability | 6 |
Test Format | 5 |
Foreign Countries | 4 |
Item Response Theory | 4 |
Author
Ames, Allison J. | 1 |
Black, Beth | 1 |
Braeken, Johan | 1 |
Bulut, Hatice Cigdem | 1 |
Bulut, Okan | 1 |
Bynum, Bethany H. | 1 |
Bürkner, Paul-Christian | 1 |
Chace, Wendy | 1 |
Chissom, Brad | 1 |
Chukabarah, Prince C. O. | 1 |
Cormier, Damien C. | 1 |
Publication Type
Reports - Research | 18 |
Journal Articles | 15 |
Speeches/Meeting Papers | 2 |
Dissertations/Theses -… | 1 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Reports - Evaluative | 1 |
Tests/Questionnaires | 1 |
Education Level
Middle Schools | 4 |
Elementary Education | 3 |
Intermediate Grades | 3 |
Grade 4 | 2 |
Grade 5 | 2 |
Grade 6 | 2 |
Primary Education | 2 |
Secondary Education | 2 |
Early Childhood Education | 1 |
Elementary Secondary Education | 1 |
Grade 3 | 1 |
Audience
Researchers | 1 |
Assessments and Surveys
ACT Assessment | 1 |
NEO Personality Inventory | 1 |
Program for International… | 1 |
Progress in International… | 1 |
Trends in International… | 1 |
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
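The abstract above is truncated before naming any specific technique. As a generic illustration of the kind of statistic this literature builds on, the sketch below counts Guttman errors for a single examinee (a harder item answered correctly while an easier item is missed); it is not the method used by He, Meadows, and Black, and the data are hypothetical.

```python
# Illustrative only: a simple Guttman-error count, one common building block for
# flagging aberrant response patterns. Not the authors' technique; data are made up.

def guttman_errors(responses, difficulties):
    """Count item pairs where a harder item is answered correctly (1)
    while an easier item is answered incorrectly (0)."""
    errors = 0
    n = len(responses)
    for i in range(n):
        for j in range(n):
            if difficulties[i] < difficulties[j]:      # item i is easier than item j
                if responses[i] == 0 and responses[j] == 1:
                    errors += 1
    return errors

if __name__ == "__main__":
    responses = [0, 1, 0, 1, 1]          # hypothetical 0/1 scored responses
    difficulties = [0.2, 0.5, 1.0, 1.4, 2.1]  # higher value = harder item
    print(guttman_errors(responses, difficulties))  # -> 5
```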
Steven R. Hiner – ProQuest LLC, 2023
The purpose of this study was to determine if there were significant statistical differences between scores on constructed response and computer-scorable questions on an accelerated middle school math placement test in a large urban school district in Ohio, and to ensure that all students have an opportunity to take the test. Five questions on a…
Descriptors: Scores, Middle Schools, Mathematics Tests, Placement Tests
Ivanova, Militsa; Michaelides, Michalis; Eklöf, Hanna – Educational Research and Evaluation, 2020
Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. The number of actions taken by students while interacting with an item is in this context a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
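Process data of the kind described above are usually logged as event records, from which the number of actions per item can be derived. The sketch below shows that basic counting step only; the log format is hypothetical and does not reflect the study's actual data structure.

```python
# Illustrative only: count logged actions per (student, item) pair.
# The event-log format below is hypothetical.

from collections import Counter

def actions_per_item(event_log):
    """Tally the number of logged actions for each (student, item) pair."""
    counts = Counter()
    for event in event_log:
        counts[(event["student"], event["item"])] += 1
    return counts

if __name__ == "__main__":
    log = [
        {"student": "s01", "item": "M1", "action": "click_option"},
        {"student": "s01", "item": "M1", "action": "change_option"},
        {"student": "s01", "item": "M2", "action": "click_option"},
        {"student": "s02", "item": "M1", "action": "click_option"},
    ]
    print(actions_per_item(log))
    # Counter({('s01', 'M1'): 2, ('s01', 'M2'): 1, ('s02', 'M1'): 1})
```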
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e. include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
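One simple screen related to the problem described above is to check whether a respondent agrees with both a positively worded item and its negatively worded counterpart before reverse-coding. The sketch below is a generic illustration, not the authors' procedure; item names and the agreement threshold are hypothetical.

```python
# Illustrative only: flag respondents who endorse both sides of pos/neg item pairs
# on a mixed-worded scale. Items, data, and threshold are hypothetical.

AGREE = 4  # on a hypothetical 1-5 Likert scale, 4-5 counts as agreement

def inconsistent_pairs(respondent, item_pairs):
    """Count pos/neg item pairs the respondent agrees with on both sides."""
    count = 0
    for pos_item, neg_item in item_pairs:
        if respondent[pos_item] >= AGREE and respondent[neg_item] >= AGREE:
            count += 1
    return count

if __name__ == "__main__":
    pairs = [("enjoy_reading", "dislike_reading"), ("math_is_fun", "math_is_boring")]
    respondent = {"enjoy_reading": 5, "dislike_reading": 4,
                  "math_is_fun": 2, "math_is_boring": 4}
    print(inconsistent_pairs(respondent, pairs))  # -> 1 inconsistent pair
```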
Bulut, Okan; Bulut, Hatice Cigdem; Cormier, Damien C.; Ilgun Dibek, Munevver; Sahin Kursad, Merve – Educational Assessment, 2023
Some statewide testing programs allow students to receive corrective feedback and revise their answers during testing. Despite its pedagogical benefits, the effects of providing revision opportunities remain unknown in the context of alternate assessments. Therefore, this study examined student data from a large-scale alternate assessment that…
Descriptors: Error Correction, Alternative Assessment, Feedback (Response), Multiple Choice Tests
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
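Thurstonian IRT models for forced-choice blocks are typically fit to binary pairwise comparisons derived from the within-block rankings. The sketch below shows only that data-preparation step under hypothetical block data; the model estimation itself (which requires specialized IRT software) is not shown.

```python
# Illustrative data step only: recode a within-block ranking into binary
# pairwise-comparison outcomes. Block contents and ranks are hypothetical.

from itertools import combinations

def block_to_pairwise(ranking):
    """Convert a within-block ranking (item -> rank, 1 = most preferred) into
    binary outcomes: 1 if the first item of the pair was ranked above the second."""
    outcomes = {}
    for item_a, item_b in combinations(sorted(ranking), 2):
        outcomes[(item_a, item_b)] = 1 if ranking[item_a] < ranking[item_b] else 0
    return outcomes

if __name__ == "__main__":
    block = {"item_1": 2, "item_2": 1, "item_3": 3}  # respondent ranked item_2 first
    print(block_to_pairwise(block))
    # {('item_1', 'item_2'): 0, ('item_1', 'item_3'): 1, ('item_2', 'item_3'): 1}
```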
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items--seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2016
To measure the response style of acquiescence, researchers recommend the use of at least 15 items with heterogeneous content. Such an approach is consistent with its theoretical definition and is a substantial improvement over traditional methods. Nevertheless, measurement of acquiescence can be enhanced by two additional considerations: first, to…
Descriptors: Test Items, Response Style (Tests), Test Content, Measurement
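A common baseline measure related to the abstract above is the mean raw endorsement across content-heterogeneous items, taken before any reverse-coding. The sketch below shows only that baseline; Kam (2016) discusses further refinements not implemented here, and the responses are hypothetical.

```python
# Illustrative only: a crude acquiescence index as mean endorsement across
# heterogeneous items (before reverse-coding). Responses are hypothetical.

def acquiescence_index(responses):
    """Mean raw response on a common Likert metric; higher values suggest
    a stronger tendency to agree regardless of item content."""
    return sum(responses) / len(responses)

if __name__ == "__main__":
    # Hypothetical 1-5 Likert responses to 15 items with unrelated content.
    respondent = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4]
    print(round(acquiescence_index(respondent), 2))  # -> 4.13
```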
Thacker, Arthur A.; Dickinson, Emily R.; Bynum, Bethany H.; Wen, Yao; Smith, Erin; Sinclair, Andrea L.; Deatz, Richard C.; Wise, Lauress L. – Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) field tests during the spring of 2014 provided an opportunity to investigate the quality of the items, tasks, and associated stimuli. HumRRO conducted several research studies summarized in this report. Quality of test items is integral to the "Theory of Action"…
Descriptors: Achievement Tests, Test Items, Common Core State Standards, Difficulty Level
Emons, Wilco H. M. – Applied Psychological Measurement, 2009
For valid decision making, it is essential to both the person being measured and the person or organization that is having the person measured that the observed scores adequately represent the underlying trait. This study deals with person-fit analysis of polytomous item scores to detect unusual patterns of sum scores on subsets of items. This…
Descriptors: Personality Theories, Personality Measures, Scores, Test Items
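Emons (2009) develops formal person-fit statistics; the sketch below only conveys the underlying idea of comparing a respondent's sum scores on item subsets with what is typical in the group. It is a crude screen, not the article's statistic, and the polytomous data are hypothetical.

```python
# Illustrative only: flag item subsets where one person's sum score is unusually
# far from the group mean. Not a formal person-fit statistic; data are hypothetical.

from statistics import mean, stdev

def subset_sum_flags(person, group, subsets, z_cut=2.0):
    """Return (subset name, z) for subsets where the person's sum score
    deviates from the group mean by at least z_cut standard deviations."""
    flags = []
    for name, items in subsets.items():
        person_sum = sum(person[i] for i in items)
        group_sums = [sum(resp[i] for i in items) for resp in group]
        z = (person_sum - mean(group_sums)) / stdev(group_sums)
        if abs(z) >= z_cut:
            flags.append((name, round(z, 2)))
    return flags

if __name__ == "__main__":
    # Hypothetical polytomous scores (0-3) on six items split into two subsets.
    subsets = {"first_half": [0, 1, 2], "second_half": [3, 4, 5]}
    group = [[2, 3, 2, 2, 3, 2], [1, 2, 2, 1, 2, 2],
             [3, 3, 2, 3, 3, 3], [2, 2, 1, 2, 2, 2]]
    person = [0, 1, 0, 3, 3, 3]  # low on the first subset, high on the second
    print(subset_sum_flags(person, group, subsets))  # -> [('first_half', -3.5)]
```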
Test-Retest Reliability of a Theory of Mind Task Battery for Children with Autism Spectrum Disorders
Hutchins, Tiffany L.; Prelock, Patricia A.; Chace, Wendy – Focus on Autism and Other Developmental Disabilities, 2008
This study examined for the first time the test-retest reliability of theory-of-mind tasks when administered to children with Autism Spectrum Disorders (ASD). A total of 16 questions within 9 tasks targeting a range of content and complexity were administered at 2 times to 17 children with ASD. In all, 13 questions demonstrated adequate…
Descriptors: Autism, Response Style (Tests), Verbal Ability, Test Reliability
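In its simplest form, a test-retest reliability check is the correlation between scores from two administrations of the same task. The sketch below computes a Pearson correlation on hypothetical scores; it does not reproduce the study's analysis or data.

```python
# Illustrative only: Pearson correlation between time-1 and time-2 task scores.
# Scores below are hypothetical.

from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

if __name__ == "__main__":
    time1 = [12, 9, 15, 7, 11, 14]   # hypothetical scores, first administration
    time2 = [11, 10, 14, 8, 12, 15]  # same children, second administration
    print(round(pearson(time1, time2), 2))
```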

Searls, Donald T.; And Others – Journal of Experimental Education, 1990
Indices that detail aspects of student test responses include overall aberrancy; tendencies to miss relatively easy items; tendencies to correctly answer more difficult items; and a combination that indicates how the latter tendencies balance each other. Mathematics test results for 368 college students illustrate the indices. (SLD)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Response Style (Tests)
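The abstract above describes indices for missing relatively easy items and correctly answering relatively difficult ones. The sketch below counts both tendencies using classical item p-values (proportion correct); the thresholds and data are hypothetical, and the article's actual indices are not reproduced.

```python
# Illustrative only: count misses on easy items and correct answers on hard items
# for one examinee, using item p-values. Thresholds and data are hypothetical.

def aberrancy_counts(responses, p_values, easy=0.8, hard=0.3):
    """Return (misses on easy items, correct answers on hard items)."""
    easy_misses = sum(1 for r, p in zip(responses, p_values) if p >= easy and r == 0)
    hard_hits = sum(1 for r, p in zip(responses, p_values) if p <= hard and r == 1)
    return easy_misses, hard_hits

if __name__ == "__main__":
    responses = [0, 1, 1, 0, 1, 1]                 # hypothetical 0/1 responses
    p_values = [0.92, 0.85, 0.60, 0.88, 0.25, 0.20]  # proportion correct per item
    print(aberrancy_counts(responses, p_values))   # -> (2, 2)
```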
Pomplun, Mark; Ritchie, Timothy; Custer, Michael – Educational Assessment, 2006
This study investigated factors related to score differences on computerized and paper-and-pencil versions of a series of primary K-3 reading tests. Factors studied included item and student characteristics. The results suggest that the score differences were more related to student than item characteristics. These student characteristics include…
Descriptors: Reading Tests, Student Characteristics, Response Style (Tests), Socioeconomic Status
Perrin, David W.; Kerasotes, Dean L. – 1979
It was hypothesized that using asterisks as attention-focusing devices would cause students to read all asterisked test items more carefully and would improve the test scores of undergraduate education students. Sixty-three undergraduates majoring in elementary or special education were administered a 36-item objective test. Asterisks were used to…
Descriptors: Difficulty Level, Higher Education, Objective Tests, Response Style (Tests)