Optimal vs. Fixed Cutoff frequencies


There were some interesting responses to my recent posting concerning the above topic, both personal and on the net, which warrant a short reaction.

In general, there is agreement that more sophisticated algorithms than "guessing" or otherwise "estimating" cut-off frequencies should be used for the smoothing and derivative computation of noisy data. However, as correctly pointed out by Dr. Scott Tashman and, in fact, repeatedly emphasized in my relevant publications, even highly advanced methods are based on a number of assumptions which may not always be satisfied: sufficiently high sampling rates are a prerequisite (Hatze, 1981, J. Biomech. 14, p. 14) but can usually be accomplished without problems. High frequency of the noise is also not a problem in applying optimal regularization to Fourier series for derivative computation, provided the ERROR PROCESS IN QUESTION MAY BE REGARDED AS A WEAKLY STATIONARY STOCHASTIC ONE and is UNCORRELATED with the underlying function (see above reference). The exact verification, or otherwise, of this property is tedious and may not be possible at all for some signals. Here one usually has to rely on the assumption that the parameters of the stochastic process do not change substantially over the period of observation.

The example mentioned by Dr. Tashman of the "jiggle" of skin-fixed markers on occasions such as heel strike is not relevant to the present discussion, since such phenomena do NOT belong to the class of RANDOM errors which were the subject of discussion, but to that of SYSTEMATIC ERRORS (Hatze, 1990, in "Biomechanics of Human Movement"). The latter type of error is much more difficult to handle and may include both low- and high-frequency components. Supplementary information (film pictures showing bony landmarks, accelerometer and force plate data, etc.) can be helpful here.

The remark in the posting of Dr. Giannis Giakas that my algorithm for optimal regularization of Fourier series involves "first differentiating and THEN filtering the data" is incorrect, as is evident from equations (A2) - (A9) on page 10 of Hatze (1979), a reference Dr. Giakas cites. The truth is that the optimal regularization parameter alpha is computed by minimizing a certain function which contains the Fourier coefficients of the UNREGULARIZED process and NOT those of any derivatives. Only AFTER the optimal filter function for the respective derivative (zeroth, first, second) has been obtained are the Fourier coefficients of this derivative computed, and NOT vice versa, as Dr. Giakas suggested in his posting. This misinterpretation of my algorithm also points to the possibility that the allegedly poor test performance of the RFS may be the result of an inferior or even incorrectly programmed version of the RFS algorithm, because our results obtained with the (improved version of the) respective computer program have always been satisfactory when applied to human movement data or to other processes involving periodic phenomena.
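To make the order of operations concrete, here is a minimal numpy sketch of a Tikhonov-style regularized Fourier derivative: the coefficients of the raw series are damped first, and only then differentiated. This is a generic illustration, NOT Hatze's actual equations (A2)-(A9); the filter form and the fixed `alpha` are illustrative assumptions (in an optimal scheme, alpha would be computed by minimizing a criterion built from the unregularized coefficients, as described above).

```python
import numpy as np

def fourier_derivative(y, dt, order=1, alpha=1e-8):
    """Regularize-then-differentiate sketch for a periodic signal.

    The Fourier coefficients of the RAW (unregularized) signal are
    computed first; a Tikhonov-style low-pass factor is applied; only
    afterwards are the coefficients of the requested derivative formed.
    """
    n = len(y)
    Y = np.fft.rfft(y)                          # raw Fourier coefficients
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)  # angular frequencies
    damping = 1.0 / (1.0 + alpha * w**4)        # illustrative filter factor
    Yd = damping * (1j * w) ** order * Y        # derivative of FILTERED series
    return np.fft.irfft(Yd, n=n)
```

With a clean periodic test signal (e.g. one sine period) and a small alpha, the returned first derivative agrees closely with the analytical one; the point of the sketch is only the ordering of the filtering and differentiation steps.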

For signals that are corrupted by noise resulting from SYSTEMATIC ERRORS whose functional form is unknown (such as the shifting of skin-fixed markers relative to the skeleton due to impact forces, muscle contour changes, etc.), no filtering technique will, in general, be able to remove this type of noise. After all, no algorithm can distinguish between a low-frequency systematic error signal and the signal proper if nothing is known about the nature of the error signal except that it is not random. Choosing a cut-off frequency of, say, 6 Hz for a gait analysis will not be helpful if the data sequences are contaminated by systematic error noise in the frequency range of 4-6 Hz. Only a redundancy of markers positioned on a segment, permitting least-squares position estimates, will in certain cases reduce the errors resulting from skin-marker shifting.
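As an illustration of the redundant-marker idea, a classical least-squares rigid-body fit (Kabsch/Procrustes type) can be sketched as follows. This is a generic pose estimator, not a specific method from the cited literature, and the function and variable names are illustrative:

```python
import numpy as np

def rigid_body_fit(ref, obs):
    """Least-squares segment pose from redundant markers (Kabsch method).

    ref : (m, 3) marker coordinates in the segment-fixed frame
    obs : (m, 3) measured lab coordinates (with skin-shift errors)
    Returns rotation R and translation t minimizing the sum of squared
    marker residuals.  With m > 3 markers, individual marker shifts are
    averaged out in the least-squares sense instead of propagating
    directly into the estimated pose.
    """
    cr, co = ref.mean(axis=0), obs.mean(axis=0)
    H = (ref - cr).T @ (obs - co)       # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = co - R @ cr
    return R, t
```

Applied frame by frame, such a fit uses all markers on a segment at once, which is exactly the redundancy argument made above.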

In SUMMARY, optimal filtering windows (CUT-OFF FREQUENCIES) for the removal of STOCHASTIC (HIGH-FREQUENCY) ERROR COMPONENTS from a noise-contaminated signal need not be guessed but can be computed automatically for the zeroth (the function itself), first, and second derivative by the methods described in my posting of 28 May. Care must, however, be taken that the conditions are satisfied under which a given method is valid. The SYSTEMATIC (usually LOW-FREQUENCY) ERROR COMPONENTS of the noisy signal cannot, in general, be removed unless they are known to be in the high (or extremely low) frequency range where no components of the signal proper are expected.

All of these problems and techniques have, however, been well known since the 1980s, so I still feel that, in this respect, the wheel is being re-invented by the present discussions. More significant and disturbing is the fact that the indiscriminate use of questionable techniques for the computation of second derivatives in inverse dynamics (motion analysis) leads to grossly erroneous joint force and moment calculations, which may have serious consequences, especially in the clinical environment.

Finally, I would like to thank those who have contacted me personally for their appreciative comments. In this connection it has been suggested that someone should write a concise and in-depth note on this topic for the ISB WWW Site, including examples and interactive software. Maybe this is a good idea.

H. Hatze, Ph.D. Professor of Biomechanics

> In general, there is agreement that more sophisticated algorithms
> than "guessing" or otherwise "estimating" cut-off frequencies should
> be used for the smoothing and derivative computation of noisy data.

That may not always be true. One could instead base the degree of smoothing on how well the data fit the laws of mechanics. For instance, in a jump from the ground after a run-up (in which there is generally a loss of horizontal velocity during the takeoff phase), one could look for the best compromise in the degree of smoothing: one that makes the horizontal velocity of the c.m. nearly constant (i.e., almost free of high-frequency noise) during the two airborne periods (before and after the last ground-contact takeoff phase), while not allowing the smoothed horizontal velocity curve to change value in the neighborhood of (but outside) the takeoff phase. For instance,

[ASCII sketch 1: smoothed horizontal velocity drifting down gradually, well before and after the takeoff phase -- this smoothed curve would not be good, because of oversmoothing.]

[ASCII sketch 2: smoothed velocity constant during each airborne period, with the drop in value confined to the takeoff phase -- this curve would probably be better.]

[ASCII sketch 3: smoothed velocity still wiggling during the airborne periods -- this one would be too UNDERsmoothed.]

A similar approach could be used with other mechanical parameters, such as the angular momentum about the c.m. (which has to be constant in the air), or the vertical motion of the c.m. (which has to follow a parabola of second derivative equal to -9.81 m/s^2, implying a straight vertical velocity vs. time graph with a known downward slope).
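A minimal numpy sketch of how such a mechanics-based criterion might be encoded, assuming a simple moving-average smoother and a known airborne-frame mask (both illustrative choices, not a method anyone in this thread has published): candidate smoothing windows are scored by how closely the smoothed vertical acceleration matches -9.81 m/s^2 during flight.

```python
import numpy as np

G = -9.81  # known free-flight vertical acceleration, m/s^2

def smooth(y, w):
    """Moving-average low-pass -- a stand-in for any smoother."""
    return np.convolve(y, np.ones(w) / w, mode="same")

def accel(y, dt):
    """Second derivative by repeated central differences."""
    return np.gradient(np.gradient(y, dt), dt)

def pick_smoothing(y, dt, flight, windows):
    """Choose the smoothing window whose second derivative best obeys
    the laws of mechanics over the airborne frames (`flight` is a
    boolean mask): free flight demands d2y/dt2 = -9.81 m/s^2."""
    def cost(w):
        a = accel(smooth(y, w), dt)
        return np.sqrt(np.mean((a[flight] - G) ** 2))
    return min(windows, key=cost)
```

On noisy synthetic data following a true parabola, an unsmoothed second derivative is dominated by noise, so this criterion rejects the smallest window automatically; extending the cost with the constant-horizontal-velocity and constant-angular-momentum terms described above would follow the same pattern.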

The "automatic smoothing" methods generally don't have any built-in information about the laws of mechanics, and therefore cannot use such information when they are choosing a value for the smoothing factor. A human operator CAN make such decisions taking into account the laws of mechanics. It should be possible to make a computer program that will mimic the decision process followed by the human operator, and which will take into account not only the frequency characteristics of the data points themselves, but also the laws of mechanics. However, it would probably be a tough program to devise. I have not yet seen an automatic method that I would trust more than my own visual inspection, although I can see how such a program might become available at some point in the future.

Also, and inevitably, deciding upon a smoothing factor ultimately involves a human's choice at one point or another of the process. This choice could be in the selection of a parameter that is used as input to the program, or it could be inherent in the program itself. Hatze says that (according to Yu and also to Orendurff) Winter's method may lead to oversmoothing, and I have also heard from many people that Woltring's approach generally UNDERsmooths the data. I have not heard any comments about the method devised by Hatze, so I am unable to judge how well it works in practice.

The one thing that seems clear to me is that this topic is not a dead issue!!

Jesus Dapena --- Jesus Dapena Department of Kinesiology Indiana University Bloomington, IN 47405, USA 1-812-855-8407 dapena@valeri.hper.indiana.edu http://www.indiana.edu/~sportbm/home.html

Although I agree with most points of the recent postings by Prof. Hatze, I would like to address the issue of "fundamental flaws". As was pointed out, one of our recent publications contained an inaccurate DESCRIPTION, stating that the derivatives are considered in the determination of the optimally filtered series in the ORFOS algorithm. This, however, does not affect the results of the comparison study, since the original algorithms presented in the literature were used and no algorithms were developed specifically for this study. As such, this inaccurate description in the INTRODUCTION of the paper does not, in our view, constitute a "fundamental flaw".

Furthermore, there are more "fundamental flaws" when some of these so-called "automatic" filtering methods are accepted and published on the basis of subjective choices of the factors that determine the behaviour of the method, validation using only a limited number of test signals, and no clear explanation of the required assumptions and limitations of the method. In such cases readers are led to believe that any "automatic" method can be applied to any set of data without examining in detail the assumptions and limitations of the method. For this reason, comparison studies using a wide range of signals are useful.

Another problem arises when a researcher is not allowed to use an algorithm for a single research study for evaluation and comparison purposes, and is instead expected to purchase a whole software package, or even a whole hardware system, to gain access to a particular method.

There is also agreement that there is already a large number of "automatic" or "semi-automatic" filtering methods, and that there is certainly "re-invention of the wheel" when single, subjective methods for the determination of the cut-off frequency are used. The important thing, as was pointed out before, is to ensure that the assumptions of a particular method are satisfied when it is applied to a specific set of data.

Personally, I prefer to use a "semi-automatic" method, in which I have some freedom to alter the parameters of the method, rather than the black-box approach of a "fully automatic" method.

Thank you very much for your time.


-- Giannis Giakas Division of SHE Staffordshire University Stoke-on-Trent ST4 2DF

Tel : +44 1782 294292 Fax : +44 1782 747167 Email: g.giakas@staffs.ac.uk http://www.staffs.ac.uk/sands/scis/sport/giannis/gian1.htm


Maintained by Dr Chris Kirtley
Last modified on Thursday, 06-Jun-96 14:59:09.