I just received this enquiry and was about to email my response when it occurred to me that this might be a good topic for discussion on CGA. Look forward to your comments!
We are a little green when it comes to the statistical interpretation
of gait lab data. I have been attempting to look at the effects of two
orthotic interventions on pathological gait. We have completed
ANOVA analysis on 18 gait parameters so far. We were quite pleased with ourselves, until it was pointed out that our error level of 5% effectively meant we had a 1 in 20 chance of interpreting a parameter as significantly different when error was the true cause of the difference.
It was suggested that it is possible to test the sensitivity
of each parameter, although he was unable to shed any light on how this
is accomplished.
Could you provide any further enlightenment??
Chris
Well, I'm no stats expert either, I'm afraid. But there are basically two types of error:
Type I: when you conclude that the hypothesis is correct (that there is a difference between two groups, e.g. control vs. treated) when it's not. This is the commonest error - caused by chance, as you mention. You can reduce the chance of making this error by reducing alpha from 5% to 1%, but very few people do this in rehab research;
Type II: (also common but not as well recognized) when you conclude that the hypothesis is not proven (i.e. there's no difference between the two groups) but in fact there is a difference. This is usually caused by having insufficient subjects (i.e. low statistical power; power = 1 - Beta, where Beta is the Type II error rate) - very common in our field! There are ways to calculate the power needed (and therefore the number of subjects needed) if you know the "effect size" (the size of the difference that is considered clinically significant, usually derived from a pilot study).
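As a rough illustration of that calculation (a sketch only, with a made-up effect size and standard deviation), the usual normal-approximation formula looks like this in Python:

```python
# Rough sketch of a sample-size calculation for a two-group comparison.
# Effect size (Cohen's d) = clinically significant difference / standard deviation.
# The 10 degree difference and 12 degree SD below are invented for illustration.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group, two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the chosen alpha
    z_beta = norm.ppf(power)            # power = 1 - Beta
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(10 / 12)))      # about 23 subjects per group
```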
My own personal view is that stats cause more problems than they are worth in rehab research. I'd far rather see people just present the data and let me make up my own mind. Unfortunately, stats have become expected (even though they are usually very dubious because of the small numbers of subjects). The risk is that people just look at the stats and don't look at the data.
Even when you have enough subjects for the various criteria to be satisfied (e.g. normal distributions) conventional (Fisher) statistics can still be quite misleading. If you speak to mathematicians these days they will often laugh when you mention Fisher and tell you that the only reason he did stats this way was because of the limited computing power available at the time. Modern statisticians are much more interested in computer simulation studies, which are apparently much more informative.
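To give a flavour of what a simulation-based approach can look like, here is a toy permutation test on two made-up samples (just a sketch, not a recommendation):

```python
# Toy permutation test: shuffle the group labels many times and ask how often
# a difference as large as the observed one turns up by chance alone.
# The walking-speed numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
control = np.array([1.02, 0.95, 1.10, 0.98, 1.05, 0.92])   # e.g. walking speed (m/s)
treated = np.array([1.12, 1.08, 1.21, 1.00, 1.15, 1.09])

observed = treated.mean() - control.mean()
pooled = np.concatenate([control, treated])

count = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)                                  # relabel subjects at random
    diff = pooled[6:].mean() - pooled[:6].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"permutation p-value ~ {count / n_perm:.3f}")
```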
Chris
--
Dr. Chris Kirtley MD PhD
Associate Professor
Dept. of Biomedical Engineering
Catholic University of America
620 Michigan Ave NE, Washington, DC 20064
Tel. 202-319-6134, fax 202-319-4287
Email: kirtleymd@yahoo.com
http://faculty.cua.edu/kirtley
Figures often beguile me, particularly when I have the arranging
of them myself; in which case the remark attributed to Disraeli would
often apply with justice and force: "There are three kinds of lies:
lies, damned lies and statistics."
- Autobiography of Mark Twain
We researchers use statistics the way a drunkard uses a lamp post - more for support than illumination.
Cheers,
The most commonly used correction for multiple comparisons is the Bonferroni correction, which effectively says the more tests you do, the lower the level at which you should accept statistical significance. Any decent stats book will guide you through this process as it's applied to multiple t-tests.
The principle is the same for ANOVA but I'm not sure whether the technical
details are the same.
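To put numbers on this for the 18 parameters in question (assuming, purely for simplicity, that the tests were independent):

```python
# Why 18 tests at alpha = 0.05 inflate the family-wise error rate,
# and what the Bonferroni correction does about it.
alpha = 0.05
n_tests = 18

# Chance of at least one false positive if all 18 null hypotheses are true
# and the tests were independent:
familywise = 1 - (1 - alpha) ** n_tests
print(f"family-wise error rate: {familywise:.2f}")      # about 0.60

# Bonferroni: only accept significance below alpha / number of tests
print(f"Bonferroni threshold: {alpha / n_tests:.4f}")   # about 0.0028
```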
A much stronger method is to limit the number of parameters you look at before you start. Preferably nominate one key parameter in advance and stick to this - whatever you do, make sure you can count the number of parameters on one hand (and no polydactyly).
How you do this is up to you. You can either use your clinical skill
and judgement to nominate these or do a pilot study, run tests on
all the variables, and use the data to nominate the top five variables
for a definitive trial. The problem with this is that in the present
environment no-one will believe you. Running multiple statistical comparisons
on data is so common that it will be assumed that
you've done all those tests and just reported the good results.
In big scale projects you can now actually pre-declare your primary
outcome measures with the Lancet before you start to ensure
that you don't cheat. This is a little over the top for most of us mere
mortals though.
Another approach which I've heard proposed recently by a visiting
lecturer from the UK (Dr Jonathan Sterne, University of
Bristol, UK - I gather he's just brought a new book out which it may
be worth looking for) is to move away from assuming that
anything below 5% is significant and anything above is not. Clearly
there's little difference between p=0.0499 and p=0.0501 and it's
daft to have a precise cut-off. Sterne would have you look at the
p-values as indicating comparative levels of confidence in results.
This then forms the basis for a balanced assessment of the data and
suggestion of probable explanations (which may include the
suggestion that any particular result is a chance finding). In biomechanics
it is rare that your parameters are ever fully independent and
finding patterns within your significance values amongst related parameters
can be powerful evidence of a real effect rather than an
aberration. Using 5% as a clear cut-off makes the process of science
appear objective but this is a lie. We should accept that the
interpretation of results is subjective and get down to the nitty-gritty
of doing this honestly and intelligently.
Another hang-up of Sterne's, which is partially related and reasonably well supported in the literature, is to focus more on confidence limits than on p-values in interpreting data.
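As a sketch of what that looks like in practice (made-up numbers, plain pooled two-sample interval):

```python
# Report the size of a between-condition difference with its 95% confidence
# interval rather than a bare p-value. The knee-flexion values are invented.
import numpy as np
from scipy import stats

brace_a = np.array([48.0, 52.5, 45.1, 50.3, 47.8, 53.2, 49.0, 46.4])  # peak knee flexion (deg)
brace_b = np.array([51.2, 55.0, 49.8, 54.1, 50.6, 56.3, 52.2, 49.9])

diff = brace_b.mean() - brace_a.mean()
n_a, n_b = len(brace_a), len(brace_b)
sp2 = ((n_a - 1) * brace_a.var(ddof=1) + (n_b - 1) * brace_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(sp2 * (1 / n_a + 1 / n_b))                 # pooled standard error
t_crit = stats.t.ppf(0.975, n_a + n_b - 2)              # 95% two-sided critical value

print(f"difference = {diff:.1f} deg, 95% CI ({diff - t_crit * se:.1f}, {diff + t_crit * se:.1f})")
```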
I find Martin Bland's An Introduction to Medical Statistics to be an excellent guide to these issues (although it is a little superficial in its treatment of ANOVA). Bland (mostly with Doug Altman) has also written
a number of articles on related issues for the BMJ which
can be accessed easily through his web-site (http://www.mbland.sghms.ac.uk/jmb.htm).
This whole area is a can of worms but you've got no option but to get to grips with it if you want to do valid science.
Hope this is useful.
Gait Analysis Service Manager, Royal Children's Hospital
Flemington Road, Parkville, Victoria 3052
Tel: +613 9345 5354, Fax +613 9345 5447
Adjunct Associate Professor, Physiotherapy, La Trobe University
Honorary Senior Fellow, Mechanical and Manufacturing Engineering, Melbourne
University
Even as we mourn the tragic end of the shuttle Columbia, we can marvel at NASA's incredible history of 10 successful manned Apollo missions and more than 100 successful shuttle missions. In addition to highly trained personnel, spaceflight requires highly reliable technology. When NASA talks about reliability, the agency ultimately is talking about how long parts function before they are likely to fail. Consider the Apollo space program: Some experts have suggested that there were a total of about 2,000,000 functional parts in the Saturn V rocket, lunar module, and command module. How much error could have been tolerated in this complicated array? Even if the reliability for each part had been 99.9% for its contribution to the mission, the potential existed for about 2,000 parts to fail—in which case, the command module almost certainly would not have made it to the moon and back!
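A quick check of the arithmetic in that example (treating each part as independently 99.9% reliable, as the paragraph assumes):

```python
# Expected failures among 2,000,000 parts that are each 99.9% reliable,
# and the chance that every single part works, assuming independence.
parts = 2_000_000
reliability = 0.999

print(f"expected failures: {parts * (1 - reliability):.0f}")   # about 2,000
print(f"chance of zero failures: {reliability ** parts:.3g}")  # effectively zero
```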
When we physical therapists talk about reliability, of course, we're talking about the error associated with a measurement. Reliability of 99.9% for a measurement used in physical therapy would almost always be astoundingly good! We could only dream….
Reliability can be a critical issue in the planning of a study. It also is a critical issue in clinical practice. The Journal has found that, regardless of whether authors are describing research or a patient case, the reason why a measurement was used cannot always be discerned in the submitted manuscript. Some authors do discuss the selection of measurements; other authors have to be asked to do so during revision. Either way, however, authors rarely clarify their clinical decision making—clarification that would enhance an otherwise superb article.
The truth is, no measurement is perfect. Whether physical therapists are making a diagnosis or determining a change in impairment or disability, all measurements have some error associated with them. All of our decisions can be error ridden! And errors are not eliminated or minimized by ignoring their presence.
Authors often try to justify the use of tests with the statement, "The reliability and validity of the measurements have been established." Even with a supporting reference, that statement is untenable. Reliability isn't like pregnancy. You can say that a woman is either pregnant or not pregnant—but you can't say that a measurement is either reliable or unreliable. Not only does error always exist, it is context dependent, and it relates to how the measurement will be used.
Is the error so large that using the measurement would be unlikely to provide useful information? Both in research and in routine practice, we have to consider whether the error could interfere with understanding the results of research or practice. Unlike statements of pregnancy, estimates of reliability lie along a continuum, and we need to know where along the continuum they lie and what that means for how the measurement can be used.
We also benefit from knowing something about how other authors have studied the reliability of the measurement being used. Did other authors study subjects who are similar to those currently being described? Did the physical therapists who took the measurements in those other studies have training and experience similar to the physical therapists taking the measurements in the current study? Were the procedures similar? Was the research sufficiently robust in terms of numbers of subjects and methods that the estimated error can be accepted as an excellent approximation of the true error? These are not esoteric issues. They relate to the practical world in which we live. And they are the basis on which clinicians should choose the measurements they use with patients.
Unless authors share their thinking about the measurements they used, the concepts in an article cannot be developed, and readers are left to imagine what they should actually have been told. Instead of saying that reliability "was established," careful authors say, "We believe that the measurement was sufficiently reliable to be used because…," followed by a logical argument, references, and details. In the Journal's experience, it takes only a few sentences to do this right. When the issue requires more than a few sentences, that usually means there is complexity, and the paper therefore will be made better by addressing the issue and the complexities forthrightly.
In characterizing estimates of error—specifically, statistics that describe reliability—many authors cite experts such as Landis and Koch,1 who contended that values of kappa above 80% indicated excellent agreement, values above 60% indicated substantial levels of agreement, values of 40% to 60% indicated moderate agreement, and values below 40% indicated poor to fair agreement. Other authors discuss other statistics. Unfortunately, what they have in common is an arbitrary method of judgment that does not relate to how a measurement will be used.
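For readers who want to see what is actually being classified, here is a small worked example of kappa with invented ratings from two clinicians; it also shows how raw agreement and kappa can tell different stories:

```python
# Cohen's kappa: observed agreement corrected for the agreement expected by
# chance, kappa = (p_o - p_e) / (1 - p_e). The ratings below are invented.
import numpy as np

rater1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])   # e.g. "refer" vs "do not refer"
rater2 = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])

p_o = np.mean(rater1 == rater2)                      # observed agreement
p_both_1 = rater1.mean() * rater2.mean()             # chance both say 1
p_both_0 = (1 - rater1.mean()) * (1 - rater2.mean()) # chance both say 0
p_e = p_both_1 + p_both_0                            # total chance agreement
kappa = (p_o - p_e) / (1 - p_e)

print(f"observed agreement {p_o:.2f}, kappa {kappa:.2f}")
```

In this invented example the raters agree on 80% of the patients, yet kappa is only about 0.52, which the Landis and Koch scheme would label merely moderate; the label says nothing about whether that level of disagreement matters for the decision at hand.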
For authors, the convenience of being able to "classify" reliability estimates and then give them value-laden names is clear. By naming reliability estimates, authors can discuss them quickly and "be done with it," claiming that their measurements have been blessed by the well-respected authors of the original papers (if those measurements, for example, reach excellent levels). The problem is that we have no basis for the classification. If we were considering the diagnostic accuracy of two surgeons, for instance, would we find it acceptable that there was a 20% chance of disagreement when it came to the decision to perform life-threatening surgery? On the other hand, that level of disagreement about the necessity to remove a lipoma wouldn't (in my view) be so bad.
Can we tolerate having the same amount of error in all of our measurements? If we use a measurement that has a possible 30% error to determine whether there is normal accessory motion at the glenohumeral joint, can we consider that measurement to be as useful as a measurement with a possible 30% error that is used to determine whether we should refer a patient to a physician in an effort to ward off possible permanent neurological damage due to disc disease?
Authors and clinicians have an obligation to provide an argument as to why any problems with a measurement are not sufficiently large to be consequential, and the amount of error that we can tolerate depends on what we are measuring and how a measurement will be used. The Landis and Koch approach has no context and does not take into account the nature of the measurement and the decisions that might be made based on the use of the measurements. Context and use are critical issues for both authors and clinicians, the difference being that authors must discuss these issues explicitly in submitted papers, whereas clinicians must consider these issues in patient management.
Measurements are not equivalent to aerospace parts, of course, but there is something that the Apollo space program can teach us about reliability. Because NASA could not reduce the error level to "acceptable," they adopted an alternate strategy: planned redundancy, usually triple redundancy. They developed so many backup systems that a catastrophic failure could occur only when there were multiple failures of the same system. When our clinical measurements have more error than we want, the Apollo example should remind us that alternate strategies can be developed—but authors need to explain these strategies, and, if they did not use any, authors should explain why.
Journal authors work hard in conducting studies and documenting practice, and harder still at preparing and revising their papers. They do their own work a disservice when they fail to share their thought process in choosing measurements and other aspects of their research methods. The same is true of clinicians who do not elaborate on why they chose measurements and interventions in case reports or who practice without regard to the quality of their measurements. Ignorance about the error level associated with measurements or dogmatic refusal to consider research evidence is poor practice.
Please don't view this Note as a statement that reliability is more important than other measurement properties—it is not! Validity, specificity, sensitivity, and a host of other properties—as well as related topics such as receiver operating characteristics (ROC curves)—are equally, if not more, important. The issue for all of these topics is the use, and the usefulness, of measurements. We need to justify and explain what we do, thereby achieving better articles, better practice, and, in the long run, better physical therapists.
Jules M Rothstein, PT, PhD, FAPTA
Editor in Chief
jules-rothstein@attbi.com