Oh, the cognitive dissonance

Just put together a video for the Clinical Biomechanics Boot Camp on ‘Clinical Practice Can be Deceptive’, wrestling with an issue I can’t quite get my head around. Sometimes writing about it helps me think more clearly, so here goes:

Foot orthotics for Achilles tendinopathy:
Clinical experience showing they help – check
Theoretical and plausible mechanism as to how they might help – check
Lab-based studies showing they do reduce load in the Achilles tendon – check
Case series and uncontrolled studies showing they work – check
Good, well-controlled RCTs showing they don’t work – damn!

Lateral Wedging for Medial Knee Osteoarthritis:
Clinical experience showing they help – check
Theoretical and plausible mechanism as to how they might help – check
Lab-based studies showing they do reduce the external adduction moment – check
Case series and uncontrolled studies showing they work – check
Good, well-controlled RCTs showing they don’t work – damn!

Seen that pattern before?

Did writing about this help me think more clearly? Nope … damn!

Homeopathy data dredging

Homeopathy does not work and cannot work. The evidence is clear, and there is plenty of it. Homeopathy is no better than a placebo; any ‘clinical’ effect it has is due to that placebo effect. I won’t get into all the details here, but if you want more, check this out: How Does Homeopathy work?

That does not stop those who try to defraud consumers with homeopathy from grasping at straws: they come up with implausible and improbable mechanisms as to how it might work (it doesn’t), clutch at badly done, flawed studies published in low- or no-impact-factor journals, and ignore all the well done, properly blinded and controlled studies published in high-impact-factor journals. And when that argument does not work, they resort to some sob story or special pleading that this is not the appropriate way to clinically test homeopathy (it is).

One way to get a ‘not quite correct’ result in a clinical trial is to collect a lot of different outcome measures; by chance alone, one or two of them may come out statistically significant, even though that is most likely a chance finding and not a real effect. To get around this problem, it is now standard practice to register clinical trials in advance and state what your primary endpoint measure is. Almost all major medical journals have for many years required this a priori registration of clinical trials before they will accept a paper for publication. One of the aims is to prevent the sort of data dredging that could go on until a set of results is found that confirms the preconceived biases of the researchers. The analysis of the data should primarily stick to what was a priori specified and registered.
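To put a rough number on how easily this happens, here is a minimal sketch (my own illustration, assuming 20 independent outcome measures each tested at p < 0.05 in a trial where the treatment truly does nothing):

```python
import random

# Chance of at least one "significant" endpoint by luck alone,
# assuming 20 independent outcome measures tested at alpha = 0.05
# on a trial where the treatment has no real effect.
alpha = 0.05
n_endpoints = 20

# Analytic answer: 1 - (1 - alpha)^n
print(1 - (1 - alpha) ** n_endpoints)  # ~0.64

# Same thing by simulation: under the null, each endpoint's p-value
# is uniform, so "significant" just means p < alpha by chance.
trials = 100_000
hits = sum(
    any(random.random() < alpha for _ in range(n_endpoints))
    for _ in range(trials)
)
print(hits / trials)  # also ~0.64
```

In other words, with that many endpoints roughly two out of every three ‘does nothing’ trials would throw up at least one ‘significant’ result by luck alone, which is exactly why the primary endpoint has to be nominated in advance.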

What drew my attention to this today was this study that looked at the publication of homeopathy trials registered on ClinicalTrials.gov. After two years, just under 50% had been published. There could be many reasons for that, but what was of most interest to me in the results was that, of those that were published, a quarter altered their primary endpoint from what was in the a priori registration! That should set off big alarm bells. It means they went looking in the data for a better result than the one they would have got with the a priori specified endpoint. That is really dodgy and should not pass the ‘sniff’ test, yet it still got published.

This sort of nonsense goes on a lot with clinical trials on alternative medicine. I already blogged about how randomized controlled trials on reflexology more often than not end up with exactly the same number in each group. That is really hard to do if you randomize properly, which means they were not randomizing properly!
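To give a sense of why equal groups are suspicious, here is a quick sketch (my own illustration, assuming simple coin-flip randomization to two groups, with hypothetical trial sizes): the chance of landing on an exactly equal split is surprisingly small and shrinks as the trial gets bigger.

```python
from math import comb

# Probability that simple (coin-flip) randomization of n participants
# into two groups produces an exactly equal split: C(n, n/2) / 2^n.
for n in (20, 50, 100, 200):
    p_equal = comb(n, n // 2) / 2 ** n
    print(f"n = {n:>3}: P(exact 50/50 split) = {p_equal:.3f}")

# Prints roughly 0.18, 0.11, 0.08 and 0.06, so most properly
# randomized trials should NOT end up with identical group sizes.
```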

I will finish with this cartoon. Those familiar with the nonsensical proposed mechanism of homeopathy working via molecular memory will appreciate why homeopathy is still full of shit.