shpalman

Let me come with you, I can see... I can see perfectly...

18th April 2009 (14:24)

[BPSDB] Recently I have been mostly reading “Homeopathic Pathogenetic Trials Produce Specific Symptoms Different from Placebo”1 and at first sight the result looks very interesting: homeopathic remedies or placebo were given to healthy volunteers in a double-blinded manner, “proving symptoms” were assigned to their remedies by a materia medica expert, and it turned out that the symptoms matched the remedies the participants were taking. In fact they matched very well: out of the 165 symptoms experienced by the 25 participants during the four-day trial, the participants in the two groups taking different homeopathic remedies experienced on average five or six symptoms specific to those remedies each, while the participants in the placebo group experienced about 10–11 “non-specific” symptoms each. The number of inconsistent symptoms experienced in each group (i.e. non-specific symptoms, or symptoms associated with the wrong remedy, experienced in the homeopathy groups) was zero. These impressive results even contradict some of the rubbish results in the literature.2,3 Out of the discussion at JREF, where you can find many quotes from the paper which I don't want to reproduce here, I've developed the following thoughts.

Here's a summary of what is supposed to have happened in the paper, as I understand it.

  • HM produces list of 20 remedies and selects 25 participants out of 59 trainee homeopaths;
  • IP chooses two of the 20 remedies “at random” (salt and arsenic) to give to the selected participants, prepares remedy and placebo pills and sends them to the study centre in randomly numbered containers;
  • RS assigns random numbers to participants according to software - “code was kept safely by the study centre”;
  • HM as “proving director” conducts interviews with participants to verify symptoms noted by participants in their diaries;
  • ?? collates symptoms across all participants in a “head-to-foot” scheme;
  • ?? sends list of symptoms (disconnected from participants) and names of the two remedies used to MM;
  • MM collates symptoms with the two remedies or the placebo using software;
  • ?? collates symptoms, now labelled according to remedies by MM, with the participants they came from;
  • RS performs statistical analysis of symptom collation;
  • ?? breaks blinding and performs final analysis.

The protagonists are

  • HM: Study director (Heribert Möllinger, first author)
  • HW: Study designer and overseer, and editor of Forschende Komplementärmedizin (Harald Walach)
  • IP: Independent pharmacist (at Dolisos, Lausanne)
  • RS: Independent researcher (Rainer Schneider, co-author); HW and RS, while reporting different affiliations, were both supported by the Samueli Institute
  • MM: Materia medica expert (Reimund Wagner, acknowledged)
  • ??: For when it's not obvious who did it.

Now firstly, it seems like the blinding was designed to make sure that MM would not be able to cheat in the assignment of remedies to participants, since he only received a list of disconnected symptoms. However, I'm not sure exactly who reconnected the symptoms, now carrying MM's guesses as to which remedies “caused” them, back to the list of participants. But we know that the blinding code, created by RS, “was only fully revealed once the database of symptoms was classified by the materia medica expert and the statistical analysis had been done blindly...” by RS. I think it would have been better if IP had created the blinding code and not sent it to the study centre until after the statistical analysis. This is the sort of thing which peer review is supposed to spot, at least in cases where one of the authors isn't the editor of the journal anyway.

So there's the possibility that RS could simply have changed which participants certain symptoms were supposed to have come from, once the symptoms were labelled with remedies. I don't know if he could have done this without HM (the lead author, who collected the symptoms from the participants via diaries and interviews) noticing: given the different affiliations, did HM ever see the list of assigned symptoms? I can't think of an innocent explanation for how this could have come about, and I'm not saying that this is what happened, just that the existence of this possibility is a weakness of the trial design. HW believes in magic,4 so maybe RS's computer fixed the results on its own. And somehow came up with 0±2 for the number of inappropriate symptoms in each group; how can you add only positive integers together and end up with 0±2? And what kind of statistical analysis could have been done blindly anyway?

Secondly, there were 7 participants receiving placebo who recorded an average of 11 non-specific symptoms each, so that's about 80 non-specific symptoms out of the reported total of 165. The 10 participants on Nat. mur. had 5 symptoms each, so that's about 50; the 8 on Ars. alb. had 6 each, so that's also about 50. It adds up to about 180, which isn't much over the 165 reported. Given that MM knew what the participants had been given, it seems about right that he would assign the symptoms in the proportions 30%:30%:40% for salt:arsenic:placebo, guessing roughly equal numbers of participants in the three groups. But then if RS deliberately reassociated the symptoms, once assigned to remedies, with the participants who got those remedies, you might indeed get the strange result that all the non-specific symptoms, which you would expect to have occurred at a similar “background” level in all groups, ended up assigned to the placebo group, which therefore had double the number of symptoms of the groups on homeopathy.
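As a sanity check on the arithmetic, here is a sketch in Python; the group sizes and per-participant averages are the ones quoted in the paragraph above, with 11 taken as the placebo figure:

```python
# Back-of-envelope check of the symptom totals quoted above. Group sizes
# and per-participant averages are as quoted from the paper; the totals
# are rough, as in the text.
groups = {
    "placebo": {"n": 7,  "symptoms_each": 11},  # non-specific symptoms
    "nat_mur": {"n": 10, "symptoms_each": 5},   # remedy-specific
    "ars_alb": {"n": 8,  "symptoms_each": 6},   # remedy-specific
}

totals = {name: g["n"] * g["symptoms_each"] for name, g in groups.items()}
grand_total = sum(totals.values())

for name, t in totals.items():
    print(f"{name}: ~{t} symptoms ({t / grand_total:.0%} of the total)")
print(f"grand total: ~{grand_total} (reported: 165)")
```

The shares come out at very roughly 30%:30%:40% for salt:arsenic:placebo, and the grand total is in the same ballpark as the 165 reported.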

So we're left with a result which is too good to be true, and the knowledge that the co-author who generated the blinding codes was the same one who did the “blind” analysis of the results. Excuse me if I don't start tearing up my physics, biology and chemistry textbooks just yet.

  1.  H. Möllinger, R. Schneider, and H. Walach, Forsch. Komplementmed. 16, online (2009).
  2.  A. Vickers, R. McCarney, P. Fisher, and R. van Haselen, Brit. Homeopathy J. 90, 126 (2001).
  3.  H. Walach, J. Sherr, R. Schneider, R. Shabi, A. Bond, and G. Rieberer, Homeopathy 93, 179 (2004).
  4.  H. Walach, Brit. Homeopathy J. 89, 127 (2000).



Posted by: ((Anonymous))
Posted at: 19th April 2009 20:47 (UTC)
Bad link

Excellent critique, but it looks like you've got the DOI link wrong!

Posted by: shpalman (shpalman)
Posted at: 19th April 2009 21:50 (UTC)
Re: Bad link

I know it doesn't work, I expect it will be enabled when the article gets published properly. It's available at the rather messier link of http://www.online.karger.com/ProdukteDB/produkte.asp?Aktion=ShowAbstract&ArtikelNr=209386&Ausgabe=247634&ProduktNr=224242 instead.

Posted by: ((Anonymous))
Posted at: 20th April 2009 12:43 (UTC)
Statistical analysis


Regarding the 0±2 issue, given the likely skewed distribution of the number of symptoms per patient, it doesn't seem unreasonable to me that you would get an arithmetic mean close to zero but an SD around 2. Obviously, if you then treat a Poisson-distributed (integer) random variable with a small mean like a Normally-distributed continuous variable, the resulting confidence intervals will be arithmetically correct, but practically meaningless.
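A minimal illustration of this point, with made-up counts (not data from the paper):

```python
import statistics

# Invented, skewed, non-negative integer "symptom counts": mostly zeros
# with a few larger values, giving a small mean but a larger SD.
counts = [0] * 20 + [1, 1, 2, 3, 6]

m = statistics.mean(counts)
s = statistics.stdev(counts)  # sample standard deviation
print(f"mean ± SD = {m:.2f} ± {s:.2f}")

# A naive Normal-theory interval m ± 2s extends below zero, which is
# impossible for a count: arithmetically correct, practically meaningless.
print(f"naive interval: [{m - 2 * s:.2f}, {m + 2 * s:.2f}]")
```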

Regarding the blinded analysis, presumably they mean that RS was given the data under three group labels (say A, B, C), which would allow most of the analyses to be done without knowledge of which group was which. Unblinding would then just be a question of saying A=Salt, B=Control, C=Arsenic. That said, any pre-planned pairwise comparisons to control would reveal the identity of the control group under that scheme, as would RS knowing how many individuals had ended up in each group... However, the general approach is usual for (two-group) clinical trials, where interim results are sometimes looked at by the researchers in the coded/blinded form.
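In miniature, the scheme described here might look like this (the group labels, counts and unblinding key are all illustrative, not from the paper):

```python
# The analyst sees only coded labels; the unblinding key is held elsewhere.
coded_data = {  # symptoms per participant, under blind labels
    "A": [5, 4, 6],
    "B": [11, 10, 12],
    "C": [6, 5, 7],
}

# Analyses run on the labels alone...
coded_means = {label: sum(v) / len(v) for label, v in coded_data.items()}

# ...and only afterwards is the key applied.
unblinding_key = {"A": "Nat. mur.", "B": "placebo", "C": "Ars. alb."}
for label, mean in coded_means.items():
    print(f"group {label} = {unblinding_key[label]}: mean {mean:.1f} symptoms")
```

As the comment notes, unequal group sizes or a pre-planned comparison against "control" would give the game away before the key is applied.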


Posted by: shpalman (shpalman)
Posted at: 23rd April 2009 12:02 (UTC)
Re: Statistical analysis

So what sort of set of numbers would give 0±2?

The data look so obvious from here that RS would have been able to guess which group a particular participant was in. However, there is no mention that anyone knew even which group any participant was in before the blinding was broken. Giving RS that information would have involved someone else breaking the blinding, collating code numbers between patients and phials, sorting the patients into groups, and then communicating the group information to RS. Given that everything else in the paper is so rigorously explained, I don't see how all this would have gone without saying.

Posted by: ((Anonymous))
Posted at: 23rd April 2009 12:50 (UTC)
Re: Statistical analysis

Well, if (say) out of a group of 10 people, 9 have 0 symptoms but 1 has 6, you end up with a mean of 0.6 and an SD of 1.9. Do a bit of clumsy rounding, and there you are... The 0±2 is of course completely invalid for these data as an approximate confidence interval, but that kind of thing never stops this type of mistake making it through peer review.
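That example checks out (sample standard deviation, via Python's statistics module):

```python
import statistics

# 10 people: 9 with 0 symptoms, 1 with 6.
symptoms = [0] * 9 + [6]

m = statistics.mean(symptoms)   # 0.6
s = statistics.stdev(symptoms)  # ~1.90
print(f"mean = {m}, SD = {s:.2f}")
```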

As far as the group information goes, you're probably right that there's something odd about the description of blinding versus analysis etc. I just thought, to be fair, I would mention that there's no reason (in general) why blinded analyses can't be carried out.


Posted by: ((Anonymous))
Posted at: 16th December 2009 17:19 (UTC)
Homeopathy

Real (homeopathic) medicine cures even when Conventional Allopathic Medicine (CAM) fails

Posted by: shpalman (shpalman)
Posted at: 16th December 2009 17:28 (UTC)
Re: Homeopathy

No, it doesn't.
