Showing 1 to 15 of 321 results
Peer reviewed
Reimers, Jennifer; Turner, Ronna C.; Tendeiro, Jorge N.; Lo, Wen-Juo; Keiffer, Elizabeth – Measurement: Interdisciplinary Research and Perspectives, 2023
Person-fit analyses are commonly used to detect aberrant responding in self-report data. Nonparametric person-fit statistics do not require fitting a parametric test theory model and have performed well compared to other person-fit statistics. However, detection of aberrant responding has primarily focused on dominance response data, thus the…
Descriptors: Goodness of Fit, Nonparametric Statistics, Error of Measurement, Comparative Analysis
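The abstract refers to nonparametric person-fit statistics only in general terms; one common example is the Guttman error count, which flags respondents who miss relatively easy items while answering relatively hard ones correctly. The sketch below is illustrative only and is not taken from the article; it assumes dichotomous 0/1 responses, with item easiness estimated from sample proportion-correct values, and the function name and toy data are hypothetical.

```python
import numpy as np

def guttman_errors(responses, item_p):
    """Count Guttman errors for one respondent: item pairs where an easier
    item (higher proportion-correct) is answered 0 while a harder item is
    answered 1. Larger counts suggest aberrant responding."""
    order = np.argsort(-np.asarray(item_p))   # items from easiest to hardest
    x = np.asarray(responses)[order]
    errors = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if x[i] == 0 and x[j] == 1:       # easier item wrong, harder item right
                errors += 1
    return errors

# Toy 0/1 responses: 5 respondents x 6 items (hypothetical)
data = np.array([
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],   # reversed pattern -> many Guttman errors
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1, 0],
])
item_p = data.mean(axis=0)                    # proportion-correct per item
print([guttman_errors(row, item_p) for row in data])
```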
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Harrison, Michael – Measurement: Interdisciplinary Research and Perspectives, 2019
Utilizing the perspective of finite mixture modeling, this note considers whether a finding of a plausible one-parameter logistic model could be spurious for a population with substantial unobserved heterogeneity. A theoretically and empirically important setting is discussed involving the mixture of two latent classes, with the less restrictive…
Descriptors: Models, Evaluation Methods, Social Science Research, Statistical Analysis
Peer reviewed
Baniasadi, Ali; Salehi, Keyvan; Khodaie, Ebrahim; Bagheri Noaparast, Khosrow; Izanloo, Balal – Asia-Pacific Education Researcher, 2023
The aim of this study was to identify the components of fair classroom assessment and formulate a conceptual framework for it by following the steps proposed by Okoli and Schabram for a systematic review. For this purpose, three databases (ERIC, Elsevier, and Springer) were systematically searched. As a result of this research, 39…
Descriptors: Ethics, Culture Fair Tests, Student Evaluation, Evaluation Methods
Andres De Los Reyes; Mo Wang; Matthew D. Lerner; Bridget A. Makol; Olivia M. Fitzpatrick; John R. Weisz – Grantee Submission, 2022
Researchers strategically assess youth mental health by soliciting reports from multiple informants. Typically, these informants (e.g., parents, teachers, youth themselves) vary in the social contexts where they observe youth. Decades of research reveal that the most common data conditions produced with this approach consist of discrepancies…
Descriptors: Mental Health, Measurement Techniques, Evaluation Methods, Research
Peer reviewed
Park, Yeonggwang; Anand, Supraja; Ozmeral, Erol J.; Shrivastav, Rahul; Eddins, David A. – Journal of Speech, Language, and Hearing Research, 2022
Purpose: Vocal roughness is present in many voice disorders, but its assessment depends mainly on subjective auditory-perceptual evaluation and lacks acoustic correlates. This study aimed to apply the concept of roughness in general sound quality perception to vocal roughness assessment and to characterize the relationship…
Descriptors: Voice Disorders, Evaluation Methods, Auditory Perception, Acoustics
Peer reviewed
Robert Meyer; Sara Hu; Michael Christian – Society for Research on Educational Effectiveness, 2022
This paper develops models to measure growth in student achievement with a focus on the possibility of differential growth in achievement for low- and high-achieving students. We consider a gap-closing model that evaluates the degree to which students in a target group -- students in the bottom quartile of measured achievement -- perform better…
Descriptors: Academic Achievement, Achievement Gap, Models, Measurement Techniques
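The gap-closing idea described above can be illustrated with a minimal descriptive sketch; this is not the authors' value-added growth model, and all scores and parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior and current scores for 1,000 students
prior = rng.normal(500, 100, size=1000)
gain = rng.normal(20, 15, size=1000) + 0.05 * (500 - prior)   # mild catch-up effect
current = prior + gain

# Target group: bottom quartile of measured prior achievement
bottom_q = prior <= np.quantile(prior, 0.25)

gap_before = prior[~bottom_q].mean() - prior[bottom_q].mean()
gap_after = current[~bottom_q].mean() - current[bottom_q].mean()

print(f"mean gain, bottom quartile: {gain[bottom_q].mean():.1f}")
print(f"mean gain, other students:  {gain[~bottom_q].mean():.1f}")
print(f"gap before -> after:        {gap_before:.1f} -> {gap_after:.1f}")
```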
Peer reviewed
Mitchell, Vincent – Higher Education Research and Development, 2019
Research impact features heavily in debates about 'the measured university' and is now formally assessed by governments in the UK and Australia. Yet clear guidance on how impact can be measured in non-monetary ways is often lacking because of confused thinking and the context-specific nature of outcomes. To help resolve this, we first propose a…
Descriptors: Evaluation Methods, Models, Business Schools, Cost Effectiveness
Peer reviewed
Whittaker, Tiffany A.; Khojasteh, Jam – Journal of Experimental Education, 2017
Latent growth modeling (LGM) is a popular and flexible technique that may be used when data are collected across several different measurement occasions. Modeling the appropriate growth trajectory has important implications with respect to the accurate interpretation of parameter estimates of interest in a latent growth model that may impact…
Descriptors: Statistical Analysis, Monte Carlo Methods, Models, Structural Equation Models
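As a rough illustration of the kind of data latent growth modeling addresses, the sketch below simulates repeated measures from individual intercepts and slopes and recovers a linear trajectory for each person with ordinary least squares. This is not the SEM estimation used in LGM (which fits growth-factor means and variances simultaneously), nor the article's Monte Carlo design; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n_persons, occasions = 200, 5
time = np.arange(occasions)                       # measurement occasions 0..4

# Latent growth factors: individual intercepts and slopes (hypothetical values)
intercepts = rng.normal(50, 10, size=n_persons)
slopes = rng.normal(2.0, 0.8, size=n_persons)
y = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 3, size=(n_persons, occasions))

# Recover a linear trajectory for each person by ordinary least squares
X = np.column_stack([np.ones(occasions), time])
coef, *_ = np.linalg.lstsq(X, y.T, rcond=None)    # coef has shape (2, n_persons)
est_intercepts, est_slopes = coef

print(f"mean intercept: true {intercepts.mean():.1f}, estimated {est_intercepts.mean():.1f}")
print(f"mean slope:     true {slopes.mean():.2f}, estimated {est_slopes.mean():.2f}")
```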
Peer reviewed
Raczynski, Kevin; Cohen, Allan – Applied Measurement in Education, 2018
The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring. Furthermore, evidence for the scoring fidelity of AES systems is accumulating. Yet questions remain when appraising the scoring performance of AES systems. These questions include: (a) which essays are…
Descriptors: Essay Tests, Test Scoring Machines, Test Validity, Evaluators
Middleton, Joel A.; Scott, Marc A.; Diakow, Ronli; Hill, Jennifer L. – Grantee Submission, 2016
In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce unobserved confounder bias. However, a developing literature has shown that conditioning on certain covariates may increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the…
Descriptors: Statistical Bias, Identification, Evaluation Methods, Measurement Techniques
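One well-known mechanism by which conditioning on a covariate can increase bias is bias amplification: adjusting for a variable that strongly predicts treatment but has no direct effect on the outcome inflates the bias from an unobserved confounder. The simulation below illustrates that mechanism under assumed coefficients; it is not necessarily the setting examined in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau = 200_000, 1.0                      # sample size, true treatment effect

u = rng.normal(size=n)                     # unobserved confounder
z = rng.normal(size=n)                     # covariate that affects treatment only
t = 1.5 * z + 1.0 * u + rng.normal(size=n) # treatment
y = tau * t + 1.0 * u + rng.normal(size=n) # outcome

def ols(columns, y):
    """Return OLS coefficients for y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_unadj = ols([t], y)[1]                   # coefficient on t, no adjustment
b_adj = ols([t, z], y)[1]                  # coefficient on t, adjusting for z

print(f"true effect: {tau}")
print(f"estimate without conditioning on z: {b_unadj:.3f}")
print(f"estimate after conditioning on z:   {b_adj:.3f}  (bias amplified)")
```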
Fazlul, Ishtiaque; Koedel, Cory; Parsons, Eric – National Center for Analysis of Longitudinal Data in Education Research (CALDER), 2022
Measures of student disadvantage--or risk--are critical components of equity-focused education policies. However, the risk measures used in contemporary policies have significant limitations, and despite continued advances in data infrastructure and analytic capacity, there has been little innovation in these measures for decades. We develop a new…
Descriptors: Academic Achievement, At Risk Students, Prediction, Disadvantaged
Forte, Ellen – Council of Chief State School Officers, 2017
Large-scale academic assessments have played a dominant role in U.S. federal and state education policies over the past couple of decades. Among the many validity issues that presently concern test users is the evaluation of alignment among large-scale assessments and the academic content and performance standards on which they are based. This…
Descriptors: Alignment (Education), Measurement, Academic Standards, Educational Policy
Reform Support Network, 2015
This publication summarizes the key discussion from experts in the field of measuring student growth during a convening held in February 2015. Experts heard about two emerging approaches to measuring growth: portfolios of student work samples and unit value-added models that provide teachers with timely and actionable feedback that they can use to…
Descriptors: Evaluation Methods, Portfolio Assessment, Teacher Evaluation, Teacher Effectiveness
Peer reviewed
von Wangenheim, Christiane G.; Petri, Giani; Zibertti, André W.; Borgatto, Adriano F.; Hauck, Jean C. R.; Pacheco, Fernando S.; Filho, Raul Missfeldt – Informatics in Education, 2017
The objective of this article is to present the development and evaluation of dETECT (Evaluating TEaching CompuTing), a model for evaluating the quality of instructional units for teaching computing in middle school based on students' perceptions collected through a measurement instrument. The dETECT model was systematically developed…
Descriptors: Units of Study, Course Evaluation, Case Studies, Evaluation Methods