ERIC Number: ED663636
Record Type: Non-Journal
Publication Date: 2024-Sep-21
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
What Does It Mean to Meet Expectations? Comparing K-3 Interim Literacy Assessments to 3rd Grade State Literacy Assessments
Aaron Soo Ping Chow; Amanda Nabors
Society for Research on Educational Effectiveness
Background: Early-grade reading proficiency is well documented as an important factor for later academic success. Researchers and educators consider the achievement of reading proficiency by the end of grade three to be crucial to future academic success and financial independence (Hein et al., 2013; La Paro & Pianta, 2000; Singh, 2013). As part of the Colorado Reading to Ensure Academic Development (READ) Act passed in 2012, K-3 students are screened for "significant reading deficiencies" (see endnote) (SRD) through interim assessments several times a year. Similarly, in 2022, Massachusetts amended its state regulations to mandate that K-3 students be screened for literacy proficiency and areas of difficulty. In both states, students begin to take a state-level English language arts (ELA) assessment in grade three: the Colorado Measures of Academic Success (CMAS) and the Massachusetts Comprehensive Assessment System (MCAS), respectively. While many of these assessments are meant to measure similar literacy proficiencies, no two tests are identical--they may emphasize different ELA aspects in both measurement and scoring. Given this variation and their shared purpose of identifying students with reading deficiencies, there is a need to examine how their differences may translate into differences in students' success on state assessments. Purpose and Research Questions: For the Colorado sample, we align the cut scores used to identify students who may have an SRD with the CMAS scale. For the Massachusetts sample, we align the cut scores used to identify students at any level of risk of reading difficulties with the MCAS scale. We investigate the following: 1. How do literacy screening assessment cut scores from different assessments compare to each other when attempting to identify similar sets of students (i.e., those at risk or significant risk of reading difficulties)? a. How do the cut scores compare to each other after disaggregating by different aspects of identity? 2. 
How do screening assessment cut scores align with state-level ELA assessments? As the screening assessments aim to measure similar constructs, we hypothesize that the cut scores would be relatively similar across assessments, but with some variation. We also hypothesize that the assessments may fail to identify many students in need of assistance to "meet expectations" on the state assessments, likely due to the differing content between the screening assessments, which typically emphasize foundational skills, and the state ELA assessments, which measure a broader set of skills. Setting and Subjects: This study uses grade three data from Colorado and Massachusetts. The Colorado dataset consists of 351,268 students between 2014-15 and 2021-22 who took one of six screening assessments and the CMAS ELA exam in the spring of grade three. The Massachusetts dataset consists of 4,066 students who took one of seven screening assessments and the MCAS ELA exam in the spring of 2021-22. Both datasets primarily consist of White and Hispanic students. About half are classified as economically disadvantaged; Massachusetts has a higher percentage of emerging bilingual students (21% vs. 16%) and students with disabilities (20% vs. 11%). Research Design and Analysis: Researchers received different data sources from the states, including demographic data and screening and state assessment data. Equipercentile linking (Kolen & Brennan, 2004) was used to link each of the screening assessments to a common external scale (the state assessments). This technique assumes two test scores from the same group of students can be considered equivalent when the scores on each test have the same percentile rank. This allows the researchers to place the cut scores onto the state assessment scales to compare the cut scores to each other and to the state assessment. 
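The equipercentile linking step described above can be sketched in a few lines: a cut score's percentile rank is computed on the screening assessment, and the state-assessment score with the same percentile rank is returned. This is a minimal illustration with hypothetical data and an illustrative function name, not the authors' implementation; a full application following Kolen and Brennan (2004) would typically also involve score smoothing.

```python
import numpy as np

def equipercentile_link(cut_score, screener_scores, state_scores):
    """Map a screening-assessment cut score onto the state assessment
    scale, assuming both score lists come from the same group of students.

    Illustrative sketch only: find the percentile rank of the cut score
    among screener scores, then return the state-test score at that same
    percentile rank.
    """
    screener = np.asarray(screener_scores, dtype=float)
    state = np.asarray(state_scores, dtype=float)
    # Percentile rank of the cut score on the screening assessment (0-100).
    pr = np.mean(screener <= cut_score) * 100
    # State-assessment score holding the same percentile rank.
    return float(np.percentile(state, pr))

# Hypothetical example: uniform screener scores 0-99, state scores 100-199.
linked_cut = equipercentile_link(50, range(100), range(100, 200))
```

Under this sketch, a cut score partway up the screener distribution maps to the correspondingly ranked point on the state scale, which is what allows cut scores from different screeners to be compared on a common metric.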
Findings: Our results indicate that the cut scores used to classify students as having an SRD in Colorado appear to be relatively similar across assessments when mapped onto the CMAS ELA scale (Table 1). Further, the mapped CMAS scores for emerging bilingual students and non-emerging bilingual students are roughly the same; the same results are found when comparing students based on their free or reduced-price lunch status. When examining the more general "at risk" cut scores using the Massachusetts data, there is slightly more variation in the linked state assessment score between screening assessments; however, the cut scores are still similar in terms of their MCAS alignment (Table 2). In addition, the SRD cut scores in Colorado generally cluster around the "Partially Met Expectations" cut on CMAS (i.e., two categories below "Met Expectations"), meaning that most students classified as having an SRD would be in the lowest category on CMAS (Figure 1). The more general cut scores in Massachusetts link to scores that range from the middle of the MCAS "Partially Meeting Expectations" level to the beginning of "Meeting Expectations". Conclusions: This study focuses on comparing screening assessments, specifically how they align with state-level assessments and how they might differ in which students they identify. The results show that despite their differences, these assessments identify students with similar types of performance on state assessments, regardless of whether one is looking at general risk (as in Massachusetts) or significant risk (as in Colorado). The results also help explain why such large percentages of students not identified as being at risk or at significant risk do not meet expectations on state assessments. One possible explanation is that the state assessments require higher levels of ELA skills than these literacy assessments. 
Therefore, if the end goal of using such assessments is to ensure that all students reach grade-level reading, educators should consider whether additional supports can be provided to students "meeting expectations" on these assessments, so that all students receive sufficient help to achieve this goal. Note: "Significant reading deficiency" is a term defined by the Colorado legislature in the READ Act. The authors wish to acknowledge that this term implies a deficit mindset and calls into question who bears responsibility for, and who sets the parameters of, a child's early academic pathways.
Descriptors: Elementary School Students, Primary Education, Grade 3, Emergent Literacy, Student Evaluation, Reading Tests, Reading Achievement, Standardized Tests, State Legislation, Educational Legislation, Screening Tests, Language Tests, Cutting Scores, Expectation
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Elementary Education; Early Childhood Education; Primary Education; Grade 3
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Colorado
Identifiers - Assessments and Surveys: Massachusetts Comprehensive Assessment System
Grant or Contract Numbers: N/A