Showing 1 to 15 of 130 results
Peer reviewed
Brogan L. Barr; Virginia V. W. McIntosh; Eileen F. Britt; Jennifer Jordan; Janet D. Carter – Measurement: Interdisciplinary Research and Perspectives, 2024
Even when raters demonstrate agreement in the use of a measure, limited score variability or violation of often-ignored statistical assumptions can result in lower reliability estimates than intuitively expected. This article uses data drawn from two randomized controlled trials of schema therapy and cognitive behavioral therapy for the treatment…
Descriptors: Evaluators, Interrater Reliability, Reliability, Measurement Techniques
Peer reviewed
Olivarez, Joseph D.; Bales, Stephen; Sare, Laura; vanDuinkerken, Wyoma – College & Research Libraries, 2018
Jeffrey Beall's blog listing of potential predatory journals and publishers, as well as his "Criteria for Determining Predatory Open-Access (OA) Publishers" are often looked at as tools to help researchers avoid publishing in predatory journals. While these "Criteria" have brought a greater awareness of OA predatory journals,…
Descriptors: Information Science, Library Science, Periodicals, Evaluation Criteria
Peer reviewed
Boller, Kimberly; Kisker, Ellen Eliason – Regional Educational Laboratory, 2014
This guide is designed to help researchers make sure that their research reports include enough information about study measures so that readers can assess the quality of the study's methods and results. The guide also provides examples of write-ups about measures and suggests resources for learning more about these topics. The guide assumes…
Descriptors: Research Reports, Research Methodology, Educational Research, Check Lists
Peer reviewed
Fraser, Mark W.; Guo, Shenyang; Ellis, Alan R.; Thompson, Aaron M.; Wike, Traci L.; Li, Jilan – Research on Social Work Practice, 2011
This article describes the core features of outcome research and then explores issues confronting researchers who engage in outcome studies. Using an intervention research perspective, descriptive and explanatory methods are distinguished. Emphasis is placed on the counterfactual causal perspective, designing programs that fit culture and context,…
Descriptors: Program Evaluation, Inferences, Intervention, Program Implementation
Peer reviewed
Matson, Johnny L.; Gonzalez, Melissa L.; Wilkins, Jonathan; Rivet, Tessa T. – Research in Autism Spectrum Disorders, 2008
The reliability of a new scale to assess Autistic Disorder, Pervasive Developmental Disorder, Not Otherwise Specified (PDD-NOS), and Asperger's Disorder in children was examined. Parents or other caregivers rated symptoms of 207 children between 2 and 16 years of age. The scale, which had 40 items in the final version, correlated highly with…
Descriptors: Autism, Interrater Reliability, Criteria, Psychopathology
Nieminen, Timo A.; Choi, Serene Hyun-Jin – International Journal of Research & Method in Education, 2008
Quantitative behaviour analysis requires the classification of behaviour to produce the basic data. This can be challenging when the theoretical taxonomy does not match observational limitations, or if a theoretical taxonomy is unavailable. Binary keys allow qualitative observation to be used to modify a theoretical taxonomy to produce a practical…
Descriptors: Developmental Disabilities, Behavioral Science Research, Classification, Identification
Peer reviewed
Cordes, Anne K. – Journal of Speech and Hearing Research, 1994
This paper contends that behavior observation data relating to speech-language pathology are reliable if they are not affected by differences among observers or other variations in the recording context. The theoretical bases of methods used to estimate reliability for observational data are reviewed, and suggestions are provided for improving the…
Descriptors: Data Collection, Interrater Reliability, Observation, Reliability
Peer reviewed
Hellawell, D. J.; Signorini, D. F. – International Journal of Rehabilitation Research, 1997
Describes pilot studies of the Edinburgh Extended Glasgow Outcome Scale (EEGOS), designed to retain the advantages of the GOS (a measure commonly used in head injury research) but to allow comparison of recovery patterns in behavioral, cognitive, and physical function. Studies show that the interrater reliability of the EEGOS is comparable to that…
Descriptors: Head Injuries, Interrater Reliability, Neurological Impairments, Outcomes of Treatment
Peer reviewed
Gaudet, Laura; Pulos, Steve; Crethar, Hugh; Burger, Susan – Education and Training in Mental Retardation and Developmental Disabilities, 2002
In this study, self-reports of 34 individuals with developmental disabilities (DD) were compared with proxy ratings from family and providers. Correlations between the ratings of individuals with DD and the proxy raters were low, as were the correlations between family members and providers. In all scales except "cognition," the individual with DD…
Descriptors: Adults, Developmental Disabilities, Evaluation Methods, Interrater Reliability
Ottenbacher, Kenneth J.; Cusick, Anne – Journal of the Association for Persons with Severe Handicaps (JASH), 1991
The study, with 79 rehabilitation therapists evaluating 21 single-subject graphs, found that the low interrater agreement often associated with visual analysis of single-subject data may be improved by simple supplements (such as trend lines) to visually inspected charts. (Author/DB)
Descriptors: Case Studies, Data Analysis, Disabilities, Evaluation Methods
Halpin, Gerald; And Others – 1986
Based upon the assumption that the process of peer review of publications and research is flawed, interrater reliability of reviews of 188 research proposals submitted for funding at a major university was studied. The eight dimensions rated were: (1) significance of the research; (2) clarity and reasonableness of the objectives; (3)…
Descriptors: College Faculty, Evaluation Criteria, Evaluators, Grants
Peer reviewed
Weinrott, Mark R.; Jones, Richard R. – Child Development, 1984
Examines the tendency of observers to make less reliable recordings of behavioral events when a calibrating observer is absent. Using four different multicategory systems, 26 experienced observers coded 200 hours of videotaped family interactions. Concludes that observers lapse into a less attentive "set" prior to coding without a…
Descriptors: Adults, Behavior Patterns, Behavior Rating Scales, Family (Sociological Unit)
Peer reviewed
Ingham, Roger J.; And Others – Journal of Speech and Hearing Research, 1995
Four experienced stuttering researchers viewed videodisks of spontaneous speech from chronic stutterers and attempted to locate the precise onset and offset of individual stuttering events. Results showed interjudge disagreements that challenge the reliability and validity of onset and offset judgments. Highly agreed stuttering events were…
Descriptors: Adults, Clinical Diagnosis, Evaluation Problems, Interrater Reliability
Peer reviewed
Oelschlaeger, Mary L.; Thorne, John C. – Journal of Speech, Language, and Hearing Research, 1999
The Correct Information Unit analysis for measuring the communicative informativeness and efficiency of connected speech was applied to the naturally occurring conversation of a person with moderate aphasia. Results indicated low intrarater and interrater reliability, although reliability of word counts was good. Most rater disagreements resulted from…
Descriptors: Aphasia, Case Studies, Communication Skills, Data Analysis
Halpin, Glennelle; And Others – 1986
This study was designed as a reconsideration of the weights used in evaluative decisions made with regard to research proposals submitted for funding at a major state university. The specific objective of the study was to determine whether the actual weights for components used in the evaluation of the proposals differed from a priori weights…
Descriptors: College Faculty, Decision Making, Evaluation Methods, Grants