Commentary by Alan M Batterham. Sportscience 14, 58-59, 2010 (sportsci.org/2010/amb.htm)

This latest contribution to the pool of resources at Sportscience is an excellent learning, teaching, and research resource, valuable for researchers and research consumers at all levels. In the form of a PowerPoint presentation, it may be used for upper-level teaching (in whole or in part) and also serves as a reference source for experienced researchers. The presentation builds on and complements the Magnitude Matters slideshow and the Progressive Statistics article published in January 2009 in MSSE. For me, the crux of the presentation is on Slide 6, emphasizing the fact that the right question is not whether an effect is statistically significant but whether it is big enough to matter.

There are three main methods for arriving at a minimum important difference: anchor-based methods, distribution-based
methods, and opinion seeking. Will notes in the presentation that clinicians
can’t agree on a value for the smallest worthwhile effect and that in the
absence of clinical consensus we need a statistical default. The approach
Will takes is therefore an example of a distribution-based method, in which changes in scores on an outcome are evaluated
in relation to the variability in scores for that outcome (e.g., thresholds
for the standardised mean difference). In anchor-based methods the aim is to
establish the change in the outcome being measured required to result in a
meaningful change on another measure that has already been shown to be clinically or practically important to the individual. For example, a
single-anchor method might involve assessing the change in maximum oxygen
uptake required for people to rate their health-related quality of life (the
anchor) as much improved. In my experience, robust anchor-based approaches are
rare in our field, and a statistical distribution-based default is sensible.
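To make the two approaches concrete, here is a minimal sketch in Python (my illustration, not from the presentation; the data are simulated, and the 0.2-SD default is the smallest standardised-difference threshold):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated trial: change in VO2max (ml/kg/min) for 100 participants,
# plus a hypothetical global-rating anchor ("much improved" vs not).
baseline_sd = 6.0                                # between-subject SD at baseline
change = rng.normal(1.5, 3.0, 100)               # observed changes in VO2max
much_improved = change + rng.normal(0, 2.0, 100) > 3.0   # the anchor rating

# Distribution-based: smallest worthwhile change as a fraction of the
# between-subject SD (0.2 is the default smallest standardised effect).
swc_distribution = 0.2 * baseline_sd
print(f"distribution-based threshold: {swc_distribution:.2f} ml/kg/min")

# Anchor-based: mean change among those who rated themselves much improved.
swc_anchor = change[much_improved].mean()
print(f"anchor-based threshold:       {swc_anchor:.2f} ml/kg/min")
```

With real data the anchor-based estimate would of course come from a measured rating, not a simulation; the point is only that the two methods answer the same question from different directions.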
Moreover, some work has suggested a reconciliation of anchor-based and
distribution-based approaches, with a near-linear
relationship between effect size and the proportion of patients benefiting
from a treatment (Norman et al., 2001).

My remaining comments relate to specific sections of the presentation.
• Slide 7 gives an example of two predictors (Strength = a + b*Age + c*Size) with the statement that such models allow us to work out the “pure” effect of each predictor: “That is, yeah, kids get stronger as they get older, but is it just because they’re bigger, or does something else happen with Age?”
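The “pure” (adjusted) effect the slide refers to can be illustrated with a small simulation (again my example, with arbitrary “true” coefficients): when Age also drives Size, fitting both predictors together recovers the independent contribution of each.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: Age drives Size, and Strength depends on BOTH
# Age and Size (the "true" effects 0.5 and 2.0 are arbitrary).
age = rng.uniform(8, 16, n)                      # years
size = 20 + 3.0 * age + rng.normal(0, 4, n)      # e.g., body mass (kg)
strength = 10 + 0.5 * age + 2.0 * size + rng.normal(0, 5, n)

# Strength = a + b*Age + c*Size, fitted by ordinary least squares.
X = np.column_stack([np.ones(n), age, size])
a, b, c = np.linalg.lstsq(X, strength, rcond=None)[0]
print(f"intercept={a:.2f}  Age effect={b:.2f}  Size effect={c:.2f}")
```

The fitted b and c land close to the simulated 0.5 and 2.0, whereas a simple regression of Strength on Age alone would fold the Size effect into the Age coefficient.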
• On Slide 15 or 16 it would have been helpful to the reader to include a note or link to the source or derivation of the Hopkins scale of effect magnitudes (for example, the Progressive Statistics paper), given that it differs from Cohen’s scale and that this presentation may be the first stop for some researchers.
• People often get very confused about the difference between …
• It crossed my mind when you were dealing with distributional issues, non-uniformity, and transformations that bootstrapping should get a mention somewhere. Bootstrapping provides trustworthy confidence limits when some of the assumptions underlying the linear model are violated, including one you didn’t mention directly: independence of the observations.

Published July 2010