Publication Date
In 2025 | 0
Since 2024 | 1
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 12
Since 2006 (last 20 years) | 24
Descriptor
Reading Tests | 25
Test Items | 25
Grade 3 | 24
Elementary School Students | 10
Grade 4 | 10
Grade 5 | 8
Reading Comprehension | 8
Achievement Tests | 7
Difficulty Level | 7
Grade 7 | 7
Grade 8 | 7
Author
Tindal, Gerald | 6
Liu, Kimy | 4
Abedi, Jamal | 2
Alonzo, Julie | 2
Ardoin, Scott P. | 2
Binder, Katherine S. | 2
Kao, Jenny C. | 2
Ketterlin-Geller, Leanne R. | 2
Leon, Seth | 2
Sundstrom-Hebert, Krystal | 2
Zebehazy, Kim T. | 2
Publication Type
Reports - Research | 21
Journal Articles | 12
Numerical/Quantitative Data | 5
Tests/Questionnaires | 4
Reports - Evaluative | 3
Speeches/Meeting Papers | 2
Reports - Descriptive | 1
Education Level
Grade 3 | 25
Elementary Education | 21
Early Childhood Education | 12
Primary Education | 12
Grade 4 | 11
Grade 5 | 9
Junior High Schools | 9
Middle Schools | 9
Secondary Education | 9
Grade 7 | 7
Grade 8 | 7
Location
South Africa | 3
Germany | 2
Pennsylvania | 2
Arkansas | 1
Australia | 1
Colorado | 1
District of Columbia | 1
Florida | 1
Illinois | 1
Maryland | 1
Massachusetts | 1
Nikola Ebenbeck; Markus Gebhardt – Journal of Special Education Technology, 2024
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT…
Descriptors: Computer Assisted Testing, Students with Disabilities, Special Education, Grade 3
Nese, Joseph F. T.; Kamata, Akihito – School Psychology, 2021
Curriculum-based measurement of oral reading fluency (CBM-R) is widely used across the United States as a strong indicator of comprehension and overall reading achievement, but has several limitations including errors in administration and large standard errors of measurement. The purpose of this study is to compare scoring methods and passage…
Descriptors: Curriculum Based Assessment, Oral Reading, Reading Fluency, Reading Tests
Guerreiro, Meg A.; Barker, Elizabeth; Johnson, Janice Lee – AERA Online Paper Repository, 2020
This paper aims to explore the incorporation of embedding items within reading passages as an effort to improve assessment equity, student experience and performance, and engagement within a universal design framework. Reading comprehension items placed within text rather than at the end may remove measurement of confounding constructs such as…
Descriptors: Reading Comprehension, Grade 3, Elementary School Students, Measurement Techniques
Corrin Moss; Scott P. Ardoin; Joshua A. Mellott; Katherine S. Binder – Grantee Submission, 2023
The current study investigated the impact of manipulating reading strategy, reading the questions first (QF) or the passage first (PF), during a reading comprehension test, and we explored how reading strategy was related to student characteristics. Participants' eye movements were monitored as they read 12 passages and answered multiple-choice…
Descriptors: Reading Processes, Accuracy, Grade 8, Reading Tests
Steinmann, Isa; Braeken, Johan; Strietholt, Rolf – AERA Online Paper Repository, 2021
This study investigates consistent and inconsistent respondents to mixed-worded questionnaire scales in large-scale assessments. Mixed-worded scales contain both positively and negatively worded items and are universally applied in different survey and content areas. Due to the changing wording, these scales require a more careful reading and…
Descriptors: Questionnaires, Measurement, Test Items, Response Style (Tests)
Kathryn A. Tremblay; Katherine S. Binder; Scott P. Ardoin; Armani Talwar; Elizabeth L. Tighe – Grantee Submission, 2021
Background: Of the myriad of reading comprehension (RC) assessments used in schools, multiple-choice (MC) questions continue to be one of the most prevalent formats used by educators and researchers. Outcomes from RC assessments dictate many critical factors encountered during a student's academic career, and it is crucial that we gain a deeper…
Descriptors: Reading Strategies, Eye Movements, Expository Writing, Grade 3
Li, Feifei – ETS Research Report Series, 2017
An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is specified for a testlet dataset. By…
Descriptors: Item Response Theory, Generalizability Theory, Tests, Error of Measurement
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
Palane, Nelladee McLeod; Howie, Sarah – Perspectives in Education, 2019
In this article, pre-Progress in International Reading Literacy Study (prePIRLS) 2011 data are used to compare the performance of different language-of-instruction groupings (English, Afrikaans, and African languages) in primary schools on the more complex, higher-order reading comprehension items tested in a large-scale international test. PrePIRLS 2011…
Descriptors: Reading Comprehension, Language of Instruction, Models, Elementary School Students
Spaull, Nicholas – South African Journal of Childhood Education, 2016
The aim of this article is to exploit an unusual occurrence whereby a large group of South African grade 3 students were tested twice, 1 month apart, on the same test in different languages. Using a simplified difference-in-difference methodology, it becomes possible to identify the causal impact of writing a test in English when English is not a…
Descriptors: Foreign Countries, Grade 3, Literacy, Numeracy
Steedle, Jeffrey; McBride, Malena; Johnson, Marc; Keng, Leslie – Partnership for Assessment of Readiness for College and Careers, 2016
The first operational administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) took place during the 2014-2015 school year. In addition to the traditional paper-and-pencil format, the assessments were available for administration on a variety of electronic devices, including desktop computers, laptop computers,…
Descriptors: Computer Assisted Testing, Difficulty Level, Test Items, Scores
Sáez, Leilani; Irvin, P. Shawn; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2013
Five hundred and seventeen words from the easyCBM Word Reading assessment (n = 57 kindergarten, 117 first grade, 172 second grade, and 171 third grade) were examined by 15 teachers for their correspondence with the Common Core State Standards in English Language Arts. In particular, the degree of correspondence between Standard 3 ("Phonics…
Descriptors: Curriculum Based Assessment, Reading Tests, Alignment (Education), Academic Standards
Koo, Jin; Becker, Betsy Jane; Kim, Young-Suk – Language Testing, 2014
In this study, differential item functioning (DIF) trends were examined for English language learners (ELLs) versus non-ELL students in third and tenth grades on a large-scale reading assessment. To facilitate the analyses, a meta-analytic DIF technique was employed. The results revealed that items requiring knowledge of words and phrases in…
Descriptors: Test Bias, Reading Tests, English Language Learners, Native Speakers
Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J. – Journal of Visual Impairment & Blindness, 2012
Introduction: This study investigated differential item functioning (DIF) of test items on Pennsylvania's Alternate System of Assessment (PASA) for students with visual impairments and severe cognitive disabilities and what the reasons for the differences may be. Methods: The Wilcoxon signed ranks test was used to analyze differences in the scores…
Descriptors: Test Bias, Test Items, Alternative Assessment, Visual Impairments
Hoadley, Ursula; Muller, Johan – Curriculum Journal, 2016
Why has large-scale standardised testing attracted such a bad press? Why has the pedagogic benefit to be derived from test results been downplayed? The paper investigates this question by first surveying the pros and cons of testing in the literature, and goes on to examine educators' responses to standardised, large-scale tests in a sample of low…
Descriptors: Foreign Countries, Standardized Tests, Developing Nations, Visual Discrimination