I am far from being any sort of infectious disease, emergency medicine or pharmacology expert and way out of my lane on this one, but if ‘Karen from Facebook’ can have a view, then why can’t I? What I do think I am good at is reading, evaluating and critiquing published scientific research, as well as evaluating consensus among real experts and not just those who have a YouTube channel, and it is that which informs what I am writing about here. Observing all the political shitfuckery that has gone on around this has also been fun as well as informative, and there is a lot to be learnt from the whole episode.
Hydroxychloroquine is in a class of medications that were first used to treat malaria, but are now more commonly used as disease-modifying anti-rheumatic drugs (DMARDs) to treat conditions like rheumatoid arthritis, lupus, childhood arthritis and some other autoimmune diseases. In March 2020, the then US President, Donald Trump, touted hydroxychloroquine as a ‘game-changer’ for COVID-19. We now know that it was not even close to being that. How did it end up there?
Chilblains are generally not that responsive to treatment: most interventions have some effect, but no one intervention really cures them or has any great consistent effect. Lots of people have opinions and preferences for treatments, most of which have not yet been shown to do any better than a placebo. When no definitive treatments have been shown to work for a condition, the wide range of anecdotal recommendations and treatment choices will contain many treatments that simply cannot work, and if they do appear to work, then it is more likely to be the natural history of the condition rather than the treatment.
Today, Google Scholar came out with the 2018 update to its ranking metrics. There is no point in me re-litigating what Google says about them, so read Google’s own information. The rankings are not without some controversy, and there are competing metrics for ranking journals. Each ranking method puts emphasis on different criteria and weights those criteria differently.
I checked their database for the ranking given to the podiatry and related journals and compiled this list:
The concept of foot orthotic dosing is something that has been bubbling away under the surface for a long time now, but for some reason, not a lot of noise gets made about it, or when noise is made about it, it tends to get dismissed by those who want to protect the way they have always done things.
To introduce the concept, consider this hypothetical analogy: what if a really well conducted clinical trial was done on a very low dose of an anti-hypertensive drug and it showed that the drug does not work at that dose? Should that be used as evidence that the drug is not effective? Of course it should not, but that is exactly what is done with clinical trials of foot orthoses at low doses. As the methodology and analysis of that hypothetical drug trial were sound, should it be included in systematic reviews and meta-analyses? It would meet all the textbook criteria for inclusion in a systematic review and meta-analysis, but, of course, it should not be included because the dose was low. To include it would probably be unethical, as it would unreasonably bias the systematic review and meta-analysis in the direction of the drug not working (unless the review stratified the study results by dose). It makes sense to exclude that study because of the low dose. So why, then, is it acceptable to do exactly that in systematic reviews and meta-analyses of foot orthoses?
The whole idea of the peer review process prior to publication is to weed out the junk so it does not get published. One thing that the alternative therapies have in common is that their journals let a lot of junk science through. Too many studies get published in those journals that should never have seen the light of day, let alone have been conducted so badly in the first place. There are ethical issues at stake in this, and the editors of those journals would do well to apprise themselves of publication ethics. Institutional ethics committees and review boards also have a responsibility to prevent bad science from even getting off the ground.
What spurred that little rant was this publication today on ‘The effect of reflexology on the quality of life with breast cancer patients’, published in the journal Complementary Therapies in Clinical Practice. They do not get much worse than this one.
It was a study that supposedly randomized 60 people with breast cancer into two groups: one group the control and one group getting reflexology, the aim being to see how it affected their quality of life and symptoms. It sounds good on the surface, but:
I am having a bad weekend commenting on bad research. There were these two dumb studies on homeopathy for heel spurs and this one on the non-existent anterior metatarsal arch. In the Clinical Biomechanics Boot Camp I really try to focus on the practical application of research, so I really look for research that is translatable to clinical practice. If it is not translatable, then what was the point of doing it? There is way too much foot orthotic research being done lately that is not translatable, wasting resources and not providing clinicians with the sort of information that they need to do their job better.
What brought this up for me today was this study in quite a prestigious online journal (PLoS ONE) that really tells us nothing. The only thing I get from this study is that I can add it to the list of studies I use when trying to illustrate how not to do foot orthotic research.