Showing 1 to 15 of 22 results
Peer reviewed
Barry, Adam E.; Szucs, Leigh E.; Reyes, Jovanni V.; Ji, Qian; Wilson, Kelly L.; Thompson, Bruce – Health Education & Behavior, 2016
Given the American Psychological Association's strong recommendation to always report effect sizes in research, scholars have a responsibility to provide complete information regarding their findings. The purposes of this study were to (a) determine the frequencies with which different effect sizes were reported in published, peer-reviewed…
Descriptors: Effect Size, Periodicals, Professional Associations, Journal Articles
Peer reviewed
Thompson, Bruce – Middle Grades Research Journal, 2009
The present article provides a primer on using effect sizes in research. A small heuristic data set is used in order to make the discussion concrete. Additionally, various admonitions for best practice in reporting and interpreting effect sizes are presented. Among these is the admonition to not use Cohen's benchmarks for "small," "medium," and…
Descriptors: Educational Research, Effect Size, Computation, Research Methodology
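The effect-size computation such primers describe can be sketched in a few lines of plain Python. The sketch below computes Cohen's d, one of the many effect-size statistics these articles discuss; the group scores are hypothetical and serve only as a concrete illustration:

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1)
                  + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

# Hypothetical scores for two groups (illustration only, not data from any study).
treatment = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1]
control = [4.8, 5.0, 5.3, 4.9, 5.2, 5.1]
d = cohens_d(treatment, control)
```

Note that, consistent with the admonition above, a computed d should be interpreted in the context of the study, not mechanically against Cohen's "small/medium/large" benchmarks.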
Peer reviewed
Harrison, Judith; Thompson, Bruce; Vannest, Kimberly J. – Review of Educational Research, 2009
This article reviews the literature on interventions targeting the academic performance of students with attention-deficit/hyperactivity disorder (ADHD) and does so within the context of the statistical significance testing controversy. Both the arguments for and against null hypothesis statistical significance tests are reviewed. Recent standards…
Descriptors: Educational Research, Academic Achievement, Statistical Significance, Effect Size
Peer reviewed
Thompson, Bruce – Psychology in the Schools, 2007
The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…
Descriptors: Intervals, Effect Size, Statistical Analysis, Statistical Significance
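One way to pair an effect size with a confidence interval, as the primer above recommends, is a percentile bootstrap. The sketch below is a minimal illustration under hypothetical data; the bootstrap is only one of several interval methods, and the sample sizes and scores are invented:

```python
import random
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d via the pooled standard deviation."""
    pooled = ((len(a) - 1) * variance(a)
              + (len(b) - 1) * variance(b)) / (len(a) + len(b) - 2)
    return (mean(a) - mean(b)) / pooled ** 0.5

def bootstrap_ci(a, b, reps=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for Cohen's d."""
    rng = random.Random(seed)
    ds = sorted(
        cohens_d([rng.choice(a) for _ in a], [rng.choice(b) for _ in b])
        for _ in range(reps)
    )
    return ds[int(reps * alpha / 2)], ds[int(reps * (1 - alpha / 2)) - 1]

# Hypothetical scores (illustration only).
treatment = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1, 5.4, 6.3]
control = [4.8, 5.0, 5.3, 4.9, 5.2, 5.1, 4.7, 5.5]
low, high = bootstrap_ci(treatment, control)
```

Reporting the interval (low, high) alongside the point estimate conveys the precision of the effect size, which a bare p value does not.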
Thompson, Bruce – 1997
Given some consensus that statistical significance tests are broken, misused, or at least have somewhat limited utility, the focus of discussion within the field ought to move beyond additional bashing of statistical significance tests, and toward more constructive suggestions for improved practice. Five suggestions for improved practice are…
Descriptors: Effect Size, Research Methodology, Statistical Significance, Test Use
Peer reviewed
Vacha-Haase, Tammi; Thompson, Bruce – Journal of Counseling Psychology, 2004
The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a "defect" (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although…
Descriptors: Effect Size, Research Methodology, Computation, Data Interpretation
Peer reviewed
Thompson, Bruce – Journal of Experimental Education, 1993
Three criticisms of conventional uses of statistical significance testing are elaborated, and alternatives for augmenting statistical significance tests are reviewed, including emphasizing effect size, evaluating statistical significance in a sample size context, and evaluating result replicability. Among ways of estimating result…
Descriptors: Effect Size, Estimation (Mathematics), Research Methodology, Research Problems
Peer reviewed
Thompson, Bruce; Snyder, Patricia A. – Journal of Experimental Education, 1997
The use of three aspects of recommended practice (language use, replicability analyses, and reporting effect sizes) was studied in quantitative reports in "The Journal of Experimental Education" (JXE) for the academic years 1994-95 and 1995-96. Examples of both errors and desirable practices in the use and reporting of statistical…
Descriptors: Effect Size, Language Usage, Research Methodology, Research Reports
Peer reviewed
Thompson, Bruce – Journal of Experimental Education, 2001
Asserts that editors should declare their expectations publicly and expose the rationale for editorial policies to public scrutiny. Supports effect size reporting and the reporting of score reliabilities. Argues against stepwise methods. Also discusses the interpretation of structure coefficients and the use of confidence intervals. (SLD)
Descriptors: Editing, Effect Size, Reliability, Research Methodology
Thompson, Bruce – 1995
Editorial practices revolving around tests of statistical significance are explored. The logic of statistical significance testing is presented in an accessible manner--many people who use statistical tests might not place such a premium on them if they knew what the tests really do, and what they do not do. The etiology of decades of misuse of…
Descriptors: Editing, Educational Assessment, Effect Size, Quality Control
Thompson, Bruce – 1998
After presenting a general linear model as a framework for discussion, this paper reviews five methodology errors that occur in educational research: (1) the use of stepwise methods; (2) the failure to consider in result interpretation the context specificity of analytic weights (e.g., regression beta weights, factor pattern coefficients,…
Descriptors: Educational Research, Effect Size, Research Methodology, Scores
Peer reviewed
Thompson, Bruce – Educational and Psychological Measurement, 1995
Use of the bootstrap method in a canonical correlation analysis to evaluate the replicability of a study's results is illustrated. More confidence may be vested in research results that replicate. (SLD)
Descriptors: Analysis of Covariance, Correlation, Effect Size, Evaluation Methods
Thompson, Bruce – 1998
Given decades of lucid, blunt admonitions that statistical significance tests are often misused, and that the tests are somewhat limited in utility, what is needed is less repeated bashing of statistical tests, and some honest reflection regarding the etiology of researchers' denial and psychological resistance (sometimes unconscious) to improved…
Descriptors: Attitudes, Change, Denial (Psychology), Educational Research
Thompson, Bruce – 1999
As an extension of B. Thompson's 1998 invited address to the American Educational Research Association, this paper cites two additional common faux pas in research methodology and explores some research issues for the future. These two errors in methodology are the use of univariate analyses in the presence of multiple outcome variables (with the…
Descriptors: Analysis of Variance, Educational Research, Effect Size, Research Methodology
Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)