Showing 1 to 15 of 66 results
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
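As a point of reference for the "shallow metrics" the abstract contrasts with deeper approaches, here is a minimal sketch of the classical proportion-correct difficulty index; the response matrix is hypothetical.

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items (1 = correct).
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
])

# Classical difficulty ("p-value"): the proportion of examinees answering each
# item correctly. Lower p means a harder item; note that this index says nothing
# about the cognitive demands of the question, which is the gap the paper targets.
p_values = responses.mean(axis=0)
print(p_values)  # ~[0.67 0.33 0.67 1.0]
```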
Peer reviewed
Direct link
Guerreiro, Meg A.; Barker, Elizabeth; Johnson, Janice Lee – AERA Online Paper Repository, 2020
This paper explores embedding items within reading passages in an effort to improve assessment equity, student experience and performance, and engagement within a universal design framework. Reading comprehension items placed within text rather than at the end may remove measurement of confounding constructs such as…
Descriptors: Reading Comprehension, Grade 3, Elementary School Students, Measurement Techniques
Peer reviewed
Direct link
Steinmann, Isa; Braeken, Johan; Strietholt, Rolf – AERA Online Paper Repository, 2021
This study investigates consistent and inconsistent respondents to mixed-worded questionnaire scales in large-scale assessments. Mixed-worded scales contain both positively and negatively worded items and are widely applied across survey and content areas. Due to the changing wording, these scales require a more careful reading and…
Descriptors: Questionnaires, Measurement, Test Items, Response Style (Tests)
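The "more careful reading" such scales demand comes from reverse-coding: negatively worded items must be flipped before scoring. Below is a minimal sketch of that step plus a naive inconsistency flag; the scale bounds, threshold, and data are all assumptions.

```python
import numpy as np

SCALE_MIN, SCALE_MAX = 1, 4                 # assumed 4-point Likert scale
pos = np.array([[4, 3], [4, 4], [1, 2]])    # positively worded items
neg = np.array([[1, 2], [4, 4], [4, 3]])    # negatively worded items

# Reverse-code negative items so high values always mean agreement with the construct.
neg_rc = SCALE_MIN + SCALE_MAX - neg

# A consistent respondent scores similarly under both wordings; a large gap suggests
# the wording change went unnoticed (the 1.5-point threshold is arbitrary).
gap = np.abs(pos.mean(axis=1) - neg_rc.mean(axis=1))
print(gap > 1.5)  # the second respondent agrees with both wordings -> flagged
```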
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peer reviewed
Direct link
Debeer, Dries; Janssen, Rianne – AERA Online Paper Repository, 2016
In educational assessments, two types of missing responses can be discerned: items can be "not reached" or "skipped". Both types of omissions may be related to the test taker's proficiency, resulting in non-ignorable missingness. This paper proposes to model not reached and skipped items as part of the response process, using…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Secondary School Students
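The distinction drawn above can be operationalized directly: a missing response counts as "not reached" when no later item was answered, and as "skipped" otherwise. A minimal sketch; the response vectors are hypothetical.

```python
import numpy as np

# Hypothetical responses; np.nan marks a missing answer.
resp = np.array([
    [1, 0, np.nan, 1, np.nan, np.nan],  # one skipped item, then two not reached
    [1, np.nan, 0, 1, 1, 1],            # one skipped item only
])

def classify_missing(row: np.ndarray) -> np.ndarray:
    answered = ~np.isnan(row)
    # Position of the last answered item; everything after it was never reached.
    last = np.max(np.nonzero(answered)) if answered.any() else -1
    labels = np.full(row.shape, "answered", dtype=object)
    labels[np.isnan(row) & (np.arange(row.size) <= last)] = "skipped"
    labels[np.arange(row.size) > last] = "not_reached"
    return labels

for row in resp:
    print(classify_missing(row))
```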
Peer reviewed
PDF on ERIC Download full text
Olney, Andrew M.; Pavlik, Philip I., Jr.; Maass, Jaclyn K. – Grantee Submission, 2017
This study investigated the effect of cloze item practice on reading comprehension, where cloze items were either created by humans, by machine using natural language processing techniques, or randomly. Participants from Amazon Mechanical Turk (N = 302) took a pre-test, read a text, and took part in one of five conditions: Do-Nothing, Re-Read,…
Descriptors: Reading Improvement, Reading Comprehension, Prior Learning, Cloze Procedure
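Of the three item-generation methods compared, the random one is simple enough to sketch: delete a randomly chosen word and keep it as the answer key. The sentence and tokenization below are hypothetical; the human- and NLP-based generators from the study are not reproduced here.

```python
import random

def random_cloze(sentence: str, rng: random.Random) -> tuple[str, str]:
    """Blank out one randomly chosen word, returning (cloze item, answer key)."""
    words = sentence.split()
    i = rng.randrange(len(words))
    answer = words[i]
    words[i] = "_____"
    return " ".join(words), answer

rng = random.Random(0)
item, key = random_cloze("Cloze items delete a word that the reader must restore", rng)
print(item)
print("answer:", key)
```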
Peer reviewed
PDF on ERIC Download full text
Carlson, Sarah E.; Seipel, Ben; Biancarosa, Gina; Davison, Mark L.; Clinton, Virginia – Grantee Submission, 2019
This demonstration introduces and presents an innovative online cognitive diagnostic assessment, developed to identify the types of cognitive processes that readers use during comprehension; specifically, processes that distinguish between subtypes of struggling comprehenders. Cognitive diagnostic assessments are designed to provide valuable…
Descriptors: Reading Comprehension, Standardized Tests, Diagnostic Tests, Computer Assisted Testing
He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia – Online Submission, 2012
For tests composed solely of testlets, the local item independence assumption tends to be violated. This study, using empirical data from a large-scale state assessment program, investigated the effects of using different models on equating results under the non-equivalent groups anchor test (NEAT) design. Specifically, the…
Descriptors: Test Items, Equated Scores, Models, Item Response Theory
Shin, Chingwei David; Chien, Yuehmei; Way, Walter Denny – Pearson, 2012
Content balancing is one of the most important components of computerized adaptive testing (CAT), especially in K-12 large-scale tests, where a complex constraint structure is required to cover a broad spectrum of content. The purpose of this study is to compare the weighted penalty model (WPM) and the weighted deviation method (WDM) under…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Test Content, Models
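At its core, the weighted deviation method is a greedy rule: pick the candidate item whose selection least violates the weighted content constraints. Below is a schematic sketch under strong simplifications (one content area per item, lower-bound targets only, no item-information term); the pool, targets, and weights are hypothetical.

```python
# Greedy WDM-style selection: choose the item minimizing the weighted sum of
# unmet content targets after a hypothetical selection (a simplification of WDM).
def pick_next(candidates, counts, targets, weights):
    def weighted_deviation(item):
        total = 0.0
        for area, target in targets.items():
            projected = counts.get(area, 0) + (1 if item["area"] == area else 0)
            total += weights[area] * max(0, target - projected)
        return total
    return min(candidates, key=weighted_deviation)

pool = [{"id": 1, "area": "algebra"}, {"id": 2, "area": "geometry"}]
counts = {"algebra": 3, "geometry": 1}      # items already administered
targets = {"algebra": 4, "geometry": 4}     # desired minimum per area
weights = {"algebra": 1.0, "geometry": 2.0}
print(pick_next(pool, counts, targets, weights))  # picks the geometry item
```

The operational WDM also folds item information into the objective; that term is omitted here to keep the constraint logic visible.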
Powers, Sonya; Turhan, Ahmet; Binici, Salih – Pearson, 2012
The population sensitivity of vertical scaling results was evaluated for a state reading assessment spanning grades 3-10 and a state mathematics test spanning grades 3-8. Subpopulations considered included males and females. The 3-parameter logistic model was used to calibrate math and reading items, and a common item design was used to construct…
Descriptors: Scaling, Equated Scores, Standardized Tests, Reading Tests
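The 3-parameter logistic model used for calibration has a standard closed form: a guessing floor c plus the remaining probability governed by a logistic function of ability. A minimal sketch; the parameter values are illustrative only.

```python
import math

def p_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL probability of a correct response: c + (1 - c) / (1 + exp(-a(theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: discrimination 1.2, difficulty 0.5, guessing floor 0.2.
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_3pl(theta, a=1.2, b=0.5, c=0.2), 3))
```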
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
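Invariance checks of this kind typically place two samples' estimates on a common scale before comparing them; mean-sigma linking is one standard way to do that. A minimal sketch; the difficulty estimates are hypothetical.

```python
import numpy as np

# Hypothetical b-parameter (difficulty) estimates for the same items in two samples.
b_ref = np.array([-1.2, -0.4, 0.3, 1.1, 1.8])
b_new = np.array([-0.9, -0.1, 0.6, 1.5, 2.1])

# Mean-sigma linking: rescale the new sample's estimates onto the reference metric.
A = b_ref.std() / b_new.std()
B = b_ref.mean() - A * b_new.mean()
b_linked = A * b_new + B

# Near-identical values after linking (high correlation, small residuals) are
# consistent with item parameter invariance across the two samples.
print(np.corrcoef(b_ref, b_linked)[0, 1], np.abs(b_ref - b_linked).max())
```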
Peer reviewed
Katz, Stuart; Lautenschlager, Gary J. – Educational Assessment, 2001
Conducted a regression analysis to assess the contributions of passage and no-passage factors to item variance on the Scholastic Aptitude Test reading comprehension task. Results show that no-passage factors play a larger role than do passage factors, accounting for as much as three-fourths of systematic variance in item difficulty and more than…
Descriptors: Reading Comprehension, Reading Tests, Regression (Statistics), Test Items
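The variance partitioning described can be mimicked with ordinary least squares: regress item difficulty from the standard (with-passage) administration on difficulty from a no-passage administration, and read the no-passage share of systematic variance off R-squared. A minimal sketch with hypothetical per-item difficulties.

```python
import numpy as np

# Hypothetical per-item proportions correct with and without the passage shown.
p_with = np.array([0.62, 0.48, 0.71, 0.55, 0.80, 0.40])
p_without = np.array([0.58, 0.41, 0.66, 0.52, 0.77, 0.30])

# OLS regression of with-passage difficulty on no-passage difficulty.
slope, intercept = np.polyfit(p_without, p_with, 1)
pred = slope * p_without + intercept
r2 = 1 - ((p_with - pred) ** 2).sum() / ((p_with - p_with.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")  # share of item-difficulty variance tied to no-passage factors
```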
Thompson, Tony D.; Davey, Tim – 2000
This paper applies specific information item selection using a method developed by T. Davey and M. Fan (2000) to a multiple-choice passage-based reading test that is being developed for computer administration. Data used to calibrate the multidimensional item parameters for the simulation study consisted of item responses from randomly equivalent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Reading Tests, Selection
Wei, Xin; Shen, Xuejun; Lukoff, Brian; Ho, Andrew Dean; Haertel, Edward H. – Online Submission, 2006
In 1998 and again in 2002, samples of eighth grade students in California were tested in reading as part of the state-level component of the National Assessment of Educational Progress (NAEP). In each of these years, all eighth graders in the state were also required to participate in the state's accountability testing, which included the reading…
Descriptors: Grade 8, Test Content, Reading Tests, Accountability
Lee, Yong-Won – 2000
This paper reports the results of an analysis of a reading comprehension test using the Q3 statistic developed by W. Yen (1984). Yen's Q3 can be a useful tool for examining local item dependence in the context of a reading comprehension test in which a reading passage is followed by a set of related items. Q3 is basically a…
Descriptors: Factor Analysis, Foreign Countries, High School Students, High Schools
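Q3 has a compact definition: fit an IRT model, form each examinee-item residual d_ij = u_ij - P_ij (observed score minus model-implied probability), and correlate the residuals of two items across examinees. A minimal sketch assuming the model probabilities are already estimated; the data are hypothetical.

```python
import numpy as np

def q3_matrix(u: np.ndarray, p_hat: np.ndarray) -> np.ndarray:
    """Yen's Q3: correlations between item residuals. u and p_hat are
    examinee-by-item arrays of scored responses and model probabilities."""
    d = u - p_hat                        # residual per examinee and item
    return np.corrcoef(d, rowvar=False)  # item-by-item Q3 matrix

# Hypothetical data: 4 examinees, 3 items.
u = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]], dtype=float)
p_hat = np.array([[0.8, 0.7, 0.3], [0.6, 0.5, 0.2],
                  [0.4, 0.6, 0.5], [0.9, 0.8, 0.7]])
print(q3_matrix(u, p_hat).round(2))  # large off-diagonal values flag dependent pairs
```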