Showing 1 to 15 of 33 results
Peer reviewed
Direct link
Clark, Amy K.; Nash, Brooke; Karvonen, Meagan – Applied Measurement in Education, 2022
Assessments scored with diagnostic models are increasingly popular because they provide fine-grained information about student achievement. Because of differences in how diagnostic assessments are scored and how results are used, the information teachers must know to interpret and use results may differ from concepts traditionally included in…
Descriptors: Elementary School Teachers, Secondary School Teachers, Assessment Literacy, Diagnostic Tests
Peer reviewed
Direct link
Carney, Michele; Crawford, Angela; Siebert, Carl; Osguthorpe, Rich; Thiede, Keith – Applied Measurement in Education, 2019
The "Standards for Educational and Psychological Testing" recommend an argument-based approach to validation that involves a clear statement of the intended interpretation and use of test scores, the identification of the underlying assumptions and inferences in that statement--termed the interpretation/use argument, and gathering of…
Descriptors: Inquiry, Test Interpretation, Validity, Scores
Peer reviewed
Direct link
Davis, Susan L.; Buckendahl, Chad W. – Applied Measurement in Education, 2009
In response to a Congressional mandate, an evaluation of the National Assessment of Educational Progress (NAEP) was undertaken beginning in 2004. The evaluation design included a series of studies that encompassed the breadth and selected areas of depth of the NAEP program. Studies were identified with input from key stakeholders and were…
Descriptors: National Competency Tests, Evaluation Methods, Evaluation Criteria, Test Results
Peer reviewed
Direct link
Lane, Suzanne; Zumbo, Bruno D.; Abedi, Jamal; Benson, Jeri; Dossey, John; Elliott, Stephen N.; Kane, Michael; Linn, Robert; Paredes-Ziker, Cindy; Rodriguez, Michael; Schraw, Gregg; Slattery, Jean; Thomas, Veronica; Willhoft, Joe – Applied Measurement in Education, 2009
Given the changing landscape of educational accountability at the local, state, and national levels, and the changes in the uses of the National Assessment of Educational Progress (NAEP), including the evolving uses of NAEP as a policy tool to interpret state assessment and accountability systems, an explicit statement of the current and potential…
Descriptors: National Competency Tests, Academic Achievement, Accountability, Test Validity
Peer reviewed
Direct link
Zenisky, April L.; Hambleton, Ronald K.; Sireci, Stephen G. – Applied Measurement in Education, 2009
How a testing agency approaches score reporting can have a significant impact on the perception of that assessment and the usefulness of the information among intended users and stakeholders. Too often, important decisions about reporting test data are left to the end of the test development cycle, but by considering the audience(s) and the kinds…
Descriptors: National Competency Tests, Scores, Test Results, Information Dissemination
Peer reviewed
Direct link
Noell, Jay; Ginsburg, Alan – Applied Measurement in Education, 2009
The report, "Evaluation of the National Assessment of Educational Progress", provides a number of recommendations for addressing validity concerns about NAEP. This article identifies actions that could be taken by the Congress, the National Center for Education Statistics, and the National Assessment Governing Board--which share responsibility for…
Descriptors: National Competency Tests, Federal Government, Public Agencies, Test Validity
Peer reviewed
Direct link
Jodoin, Michael G.; Zenisky, April; Hambleton, Ronald K. – Applied Measurement in Education, 2006
Many credentialing agencies today either administer their examinations by computer or are likely to do so in the coming years. Unfortunately, although several promising computer-based test designs are available, little is known about how well they function in examination settings. The goal of this study was to compare fixed-length…
Descriptors: Computers, Test Results, Psychometrics, Computer Simulation
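Comparisons like the one described in this abstract rest on computer simulation of examinee responses under different test designs. As a rough illustration only, the sketch below simulates responses under a 2PL IRT model and contrasts a fixed, pre-assembled form with a simple adaptive design that selects the most informative remaining item; the item pool, test length, selection rule, and ability estimator are all assumptions made for the sketch, not the designs examined in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, test_len = 200, 30
a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # item discriminations (hypothetical pool)
b = rng.normal(0.0, 1.0, size=n_items)                 # item difficulties (hypothetical pool)
grid = np.linspace(-4, 4, 81)                          # quadrature grid for the ability estimate

def p_correct(theta, i):
    """2PL probability of a correct response to item i at ability theta."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def item_information(theta, i):
    p = p_correct(theta, i)
    return a[i] ** 2 * p * (1 - p)

def eap_estimate(items, responses):
    """Posterior mean of theta under a standard normal prior (simple EAP)."""
    log_post = -0.5 * grid ** 2
    for i, u in zip(items, responses):
        p = p_correct(grid, i)
        log_post += np.log(p) if u else np.log(1 - p)
    post = np.exp(log_post - log_post.max())
    return float(np.sum(grid * post) / np.sum(post))

def administer(true_theta, adaptive):
    """Administer a 30-item test as a fixed linear form or a simple adaptive design."""
    items, responses, theta_hat = [], [], 0.0
    for _ in range(test_len):
        available = [i for i in range(n_items) if i not in items]
        if adaptive:
            nxt = max(available, key=lambda i: item_information(theta_hat, i))  # most informative item
        else:
            nxt = available[0]                                                  # fixed, pre-assembled form
        items.append(nxt)
        responses.append(rng.random() < p_correct(true_theta, nxt))
        theta_hat = eap_estimate(items, responses)
    return theta_hat

true_theta = 1.0
print("fixed-length form estimate:", round(administer(true_theta, adaptive=False), 2))
print("adaptive design estimate:  ", round(administer(true_theta, adaptive=True), 2))
```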
Peer reviewed
Wolfe, Edward W.; Gitomer, Drew H. – Applied Measurement in Education, 2001
Attempted to improve the measurement quality of a complex performance assessment through principled assessment design using the example of the National Board for Professional Teaching Standards Early Childhood/Generalist examination. All indexes examined improved after revisions were made. Results show the importance of attention to assessment…
Descriptors: Change, Performance Based Assessment, Psychometrics, Scores
Peer reviewed
Mislevy, Robert J.; Steinberg, Linda S.; Breyer, F. Jay; Almond, Russell G.; Johnson, Lynn – Applied Measurement in Education, 2002
Presents a design framework that incorporates integrated structures for modeling knowledge and skills, designing tasks, and extracting and synthesizing evidence. Illustrates these ideas in the context of a project that assesses problem solving in dental hygiene through computer-based simulations. (SLD)
Descriptors: Computer Simulation, Dental Hygienists, Educational Assessment, Evaluation Utilization
Peer reviewed
Ercikan, Kadriye – Applied Measurement in Education, 1997
Linking scores from the National Assessment of Educational Progress (NAEP) to statewide test results was studied. Results based on an equipercentile procedure suggest that such a link does not provide precise information. Information from a linking study should be limited to rough estimates of the percentage of students in each NAEP achievement level. (SLD)
Descriptors: Equated Scores, Estimation (Mathematics), National Surveys, State Programs
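For readers unfamiliar with the procedure, equipercentile linking maps a score on one test to the score on another test that has the same percentile rank. The sketch below illustrates the idea with simulated, hypothetical score distributions standing in for a state test and NAEP; it is not the article's analysis or data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical score distributions standing in for a state test and a NAEP-like scale
state_scores = rng.normal(200, 25, size=5000).round()
naep_scores = rng.normal(150, 35, size=5000).round()

def equipercentile_link(x, y, score_x):
    """Map a score on test X to the Y score with the same percentile rank."""
    percentile_rank = np.mean(x <= score_x)   # proportion of X scores at or below score_x
    return np.quantile(y, percentile_rank)    # Y score at that percentile

# A hypothetical state-test score of 220 maps to roughly the same percentile on the NAEP-like scale
print(equipercentile_link(state_scores, naep_scores, 220))
```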
Peer reviewed
Meijer, Rob R. – Applied Measurement in Education, 1996
This special issue is devoted to person-fit analysis, which is also referred to as appropriateness measurement. An introduction to person-fit research is given. Several types of aberrant response behavior on a test are discussed, and whether person-fit statistics can be used to detect dominant score patterns is explored. (SLD)
Descriptors: Identification, Item Response Theory, Research Methodology, Responses
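Person-fit (appropriateness) statistics flag response patterns that are inconsistent with what a measurement model predicts, such as an examinee who misses easy items but answers hard ones correctly. The sketch below computes one of the simplest such indices, a count of Guttman errors, for an assumed set of dichotomous items ordered from easiest to hardest; it is an illustrative example only, not a method from the special issue.

```python
import numpy as np

# Item responses (1 = correct), with items assumed ordered from easiest to hardest
consistent = np.array([1, 1, 1, 1, 0, 0])   # expected pattern: passes easy items, misses hard ones
aberrant   = np.array([0, 0, 1, 0, 1, 1])   # misses easy items, passes hard ones

def guttman_errors(pattern):
    """Count item pairs where an easier item is missed but a harder item is answered correctly."""
    return sum(
        1
        for i in range(len(pattern))
        for j in range(i + 1, len(pattern))
        if pattern[i] == 0 and pattern[j] == 1
    )

print(guttman_errors(consistent))   # 0 errors: pattern fits a Guttman ordering
print(guttman_errors(aberrant))     # a large count flags a potentially aberrant response pattern
```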
Peer reviewed
Bishop, N. Scott; Frisbie, David A. – Applied Measurement in Education, 1999
Studied the effects of overlapping some test items across consecutive test levels, using overlapping and nonoverlapping items with 834 prematched and 782 matched elementary school students, and focusing on whether item familiarization affects achievement test scores. No effects were detected. (SLD)
Descriptors: Achievement Tests, Elementary Education, Elementary School Students, Scores
Peer reviewed
Reise, Steven P.; Flannery, Wm. Peter – Applied Measurement in Education, 1996
Statistical and theoretical issues that arise from assessing person-fit on measures of typical performance are discussed, including the frequent attenuation of person-misfit detection, the need for methods of identifying sources of response aberrancy, and the use of person-fit measures as moderators of trait-criterion relations. (SLD)
Descriptors: Item Response Theory, Measurement Techniques, Performance, Responses
Peer reviewed
Direct link
Goodman, Dean P.; Hambleton, Ronald K. – Applied Measurement in Education, 2004
A critical, but often neglected, component of any large-scale assessment program is the reporting of test results. In the past decade, a body of evidence has been compiled that raises concerns over the ways in which these results are reported to and understood by their intended audiences. In this study, current approaches for reporting…
Descriptors: Test Results, Student Evaluation, Scores, Testing Programs
Peer reviewed
Linn, Robert L. – Applied Measurement in Education, 1998
The validity of interpretations of National Assessment of Educational Progress (NAEP) achievement levels is evaluated by focusing on evidence regarding three types of discrepancies: (1) between standards; (2) among descriptions of achievement levels; and (3) between assessments and content standards. All of these discrepancies raise serious…
Descriptors: Academic Achievement, Achievement Tests, Elementary Secondary Education, National Surveys