Category Archives: Research

‘More junk getting through to the keeper*’

The whole idea of the peer review process prior to publication is to weed out the junk so that it does not get published. One thing the alternative therapies have in common is that their journals let a lot of junk science through. Too many studies get published in those journals that should never see the light of day, let alone have been conducted so badly in the first place. There are ethical issues at stake here, and the editors of those journals would do well to apprise themselves of publication ethics. Institutional ethics committees and review boards also have a responsibility to prevent bad science from even getting off the ground.

What spurred that little rant was this publication today on ‘The effect of reflexology on the quality of life with breast cancer patients’, published in the journal Complementary Therapies in Clinical Practice. They do not get much worse than this one.

It was a study that supposedly randomised 60 people with breast cancer into two groups: one control group and one group receiving reflexology. The aim was to see how reflexology affected their quality of life and symptoms. Sounds good on the surface, but:

They ended up with exactly 30 in each group, and reading what they did, they did not randomise – even though they said they did! They just allocated participants to the groups depending on the day of the week. The statistical tests they used assume that proper randomisation took place. Epic fail. How did the reviewers and editor not see that randomisation did not take place? Regardless of the results, the study should have stopped there: without proper randomisation the results are meaningless and cannot be trusted. As the study was approved by the “Ethical Committee of the Health Science Institute of Ataturk University”, that ethics committee needs to look at its decision-making process, as all the work that went into this study and the voluntary participation of the participants was wasted.

I have already blogged about reflexology studies almost always ending up with the exact same number in each group when they are supposed to be randomised and this study just confirms that problem.
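To put a rough number on how unlikely a perfectly even split is, here is a quick back-of-the-envelope sketch (my own illustration, not from the paper), assuming simple unrestricted randomisation in which each of 60 participants is independently assigned to one of the two groups with probability 0.5:

```python
from math import comb

# Probability that simple (unrestricted) randomisation of 60 participants,
# each assigned to one of two groups with probability 0.5,
# yields exactly 30 in each group.
n = 60
p_exactly_equal = comb(n, n // 2) / 2 ** n
print(f"P(exactly 30/30) = {p_exactly_equal:.3f}")  # roughly 0.10
```

So any single trial landing on a perfect 30/30 split is unremarkable, but a whole literature of "randomised" trials all reporting exactly equal groups is a red flag. (To be fair, blocked randomisation deliberately produces equal groups, but day-of-week allocation, as used here, is neither blocked nor random.)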

And while we could stop there, as the data cannot be trusted, they then went on and did 38 within-groups t-tests! Seriously? Did the authors, peer reviewers and editor of the journal not see an alarm bell go off with that?

  • they were within-groups analyses rather than between-groups analyses
  • do that many t-tests and, just by chance, you will get some statistically significant results
  • there was no hint of a Bonferroni correction for the multiple tests.
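To illustrate why 38 uncorrected t-tests set off alarm bells, here is a quick calculation (my own, not from the paper). Under the simplifying assumptions that every null hypothesis is true and the tests are independent at α = 0.05, the chance of at least one spurious “significant” result is enormous:

```python
alpha = 0.05
n_tests = 38

# Family-wise error rate: the chance of at least one false positive
# across 38 independent tests, each run at alpha = 0.05.
fwer = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {fwer:.2f}")  # about 0.86

# The Bonferroni correction divides alpha by the number of tests,
# so each individual test would need p < 0.00132 to count as significant.
bonferroni_alpha = alpha / n_tests
print(f"Bonferroni-corrected alpha = {bonferroni_alpha:.5f}")
```

The independence assumption is a simplification (within-groups measures are usually correlated), but the point stands: with 38 tests and no correction, some “significant” findings are all but guaranteed by chance alone.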

The authors’ conclusion that “Reflexology was found to reduce the symptoms experienced by breast cancer patients, while at the same time increasing the functional and general health status” simply cannot be supported by the data from this study … yet it still made it through to the keeper.

Having said that, a damn good foot massage will probably make anyone with a chronic illness feel better; but that is massage, NOT reflexology, and it is NOT supported by the results of this study.

*for those not familiar with the metaphor: “through to the keeper” comes from cricket, where the batsman does not even attempt to play at the ball and lets it pass through to the wicketkeeper unchallenged.


Craig Payne
University lecturer, runner, cynic, researcher, skeptic, forum admin, woo basher, clinician, rabble-rouser, blogger, dad. Follow me on Twitter, Facebook and Google+

More non-translatable foot orthotic research


I am having a bad weekend commenting on bad research. There were these two dumb studies on Homeopathy for Heel Spurs and this one on the non-existent anterior metatarsal arch. In the Clinical Biomechanics Boot Camp I really try to focus on the practical application of research, so I look for research that is translatable to clinical practice. If it’s not translatable, then what was the point of doing it? There is way too much foot orthotic research being done lately that is not translatable, wasting resources and not providing clinicians with the sort of information they need to do their job better.

What brought this up for me today was this study in quite a prestigious online journal (PLoS ONE) that really tells us nothing. The only thing I get from this study is that I can add it to the list of studies I use when trying to illustrate how not to do foot orthotic research.

The purpose of the study was to look at the effects of different arch heights on rearfoot and tibial motion and they found no systematic effects on eversion excursion or the range of internal tibia rotation. I have no problems with the design or analysis of this study.

What I have a problem with is the choice of foot orthotic design in the subjects used in the study:

  • all the subjects had “normal foot flexibility, ankle ROMs, normal arch height and the absence of any foot pathologies or deformities”, so they are not the sort of people who would normally get foot orthotics in clinical practice. What was the point of doing that?
  • the study focused on one design feature (arch height), looked for generic effects of it, and did not look at the effects of that design feature in the people it is designed for. What was the point of that? I have no idea. They should have tested the design feature in those who need it (or subdivided the participants into those in whom the design feature would be clinically indicated and those in whom it would not, so we could see if the design feature really does what we think it might do in each group). Now that would have been translatable research; that would have increased our understanding of the effects of different foot orthotic design features.
  • the choice of the particular design feature being tested (arch height) against the parameters they were looking at (rearfoot eversion and tibial rotation) is somewhat odd, as I would not have expected it to have much effect, so why choose that to measure? A medial arch design of the type used by the authors just inverts the forefoot on the rearfoot, so why would it affect the rearfoot? It probably does, but any effect of an arch support design feature on the rearfoot and more proximal structures has to be mediated via the midfoot joints first, and how much effect it has will depend on the range of motion of those joints. The authors did not look at that or control for that. Any effect the design feature has on the rearfoot or proximally will also depend on the location of the subtalar joint axis. Given the variability of that axis, how much of the arch support was on the medial side of the axis? In some participants the arch support would have been on the lateral side and had the opposite effect. The authors did not look at that or control for that. The windlass mechanism is an important natural way that the foot supports itself and has significant impacts on the parameters the authors measured. If the plantar fascia was prominent in some participants, then the arch support design feature would have interfered with the windlass mechanism. The authors did not look at that or control for that. These issues (what the design feature is for, the position of the subtalar joint axis, and the windlass mechanism) would probably explain why they found no systematic effect.

The only positive I take from this study is that we need more studies that test individual design features and not just generic “foot orthotics”. Those design features need to be tested in the populations that clinicians think they are indicated for, to see if they do what clinicians think they do. There is no problem testing them in populations they are not indicated for, as long as they are compared with the populations they are indicated for.

The above study also brings into focus the parameter(s) that a study looks at. The authors looked at the impact of arch support designs on rearfoot and more proximal factors, but what clinician with a good understanding of foot orthotics uses arch support design features to change those parameters? A study should measure the parameters that the design feature is used clinically to try to change, to see if it really does change them. That is translatable research.

There is no doubt that there is a divide between what researchers think is translatable foot orthotic research and what clinicians think they can use to implement into clinical practice. The clinician is the one that actually has to use it.

Wahmkow, G., Cassel, M., Mayer, F., & Baur, H. (2017). Effects of different medial arch support heights on rearfoot kinematics. PLoS ONE, 12(3). DOI: 10.1371/journal.pone.0172334
