shpalman (shpalman) wrote,

If at first you don't succeed

[BPSDB] Otto Weingärtner explains, in the latest issue of J. Alt. Complement. Med.,1 that in clinical trials more accurate results come from trials with larger numbers of participants, supporting the methodology of Shang et al.2 Shang et al. ranked trials of both homeopathy and proper medicine according to their “quality” and number of participants, and found that better-quality trials of homeopathy with larger numbers of participants tended to show smaller differences between homeopathy and placebo. This is in accordance with Bernoulli's “weak law of large numbers”, which says that although individual data points scatter randomly about the true value, their mean converges as close as you like to the true value as more and more data are obtained. By taking more and more data, by performing trials with many participants and by pooling the results of trials in meta-analyses, the effects of random scatter are slowly averaged away.
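The weak law of large numbers is easy to demonstrate numerically. Here is a minimal sketch (the effect size of zero and the noise level are my illustrative choices, not anything from Weingärtner's paper): each simulated “trial” averages noisy measurements over its participants, and the worst deviation from the true value shrinks as trials get bigger.

```python
import random

random.seed(42)

TRUE_VALUE = 0.0   # the true effect size; for homeopathy vs placebo, zero
NOISE = 1.0        # standard deviation of the random scatter per participant

def trial_mean(n_participants):
    """Mean measured effect in one simulated trial: truth plus random scatter."""
    return sum(random.gauss(TRUE_VALUE, NOISE)
               for _ in range(n_participants)) / n_participants

# Small trials scatter widely about the true value; larger trials cluster near it.
spreads = []
for n in (10, 100, 10000):
    means = [trial_mean(n) for _ in range(200)]
    worst = max(abs(m - TRUE_VALUE) for m in means)
    spreads.append(worst)
    print(f"n={n:5d} participants: worst deviation over 200 trials = {worst:.3f}")
```

The scatter of a trial's mean shrinks like one over the square root of the number of participants, which is exactly why the large, high-quality trials in Shang et al. are the ones to trust.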

Of course, that's not what Weingärtner thinks he has explained. He rather thinks that he has come up with some way to excuse the problems with reproducibility that homeopaths think they have. It doesn't help that homeopaths either don't understand, cherry-pick, or move the goalposts of trials, describing trials which show that homeopathy is indistinguishable from placebo as “inconclusive” and only accepting as “negative” trials which show that homeopathy is worse. It also doesn't help that the “positive” trials, which tend to be those of low quality or with fewer participants, plus a few statistical blips (by definition, 5% of trials of an ineffective treatment come out positive at p<0.05 just out of luck), are all added together with equal weight and set against the “negative” ones. If there are more “positive” trials than “negative” ones, they claim homeopathy works. What they should of course be doing is a meta-analysis to look for the strength of the effect (not just its yes/no existence), which of course turns out to be vanishingly small when investigated properly (so that if there were an effect, which there isn't, it would be too small to be of any use). But stronger evidence than this would be required to accept something as implausible as homeopathy, as Le Canard Noir explains in a post which turns out to be hardly about homeopathy at all.
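The difference between vote-counting and a proper meta-analysis is worth seeing in numbers. In this sketch (trial sizes and counts are my own illustrative choices) an utterly ineffective treatment is tested in many trials: about 5% of them come out “positive” at p<0.05 by luck alone, yet the pooled effect estimate is essentially zero.

```python
import math
import random

random.seed(1)

N_TRIALS = 2000
N_PER_TRIAL = 50
SD = 1.0  # per-participant scatter; the true effect is exactly zero

positives = 0
pooled_sum, pooled_count = 0.0, 0
for _ in range(N_TRIALS):
    data = [random.gauss(0.0, SD) for _ in range(N_PER_TRIAL)]
    mean = sum(data) / N_PER_TRIAL
    z = mean * math.sqrt(N_PER_TRIAL) / SD
    if z > 1.645:          # one-sided p < 0.05: a "positive" trial by luck alone
        positives += 1
    pooled_sum += sum(data)
    pooled_count += N_PER_TRIAL

pooled_effect = pooled_sum / pooled_count
print(f"'positive' trials: {positives}/{N_TRIALS} (about 5% expected)")
print(f"pooled effect estimate: {pooled_effect:+.4f} (true value is 0)")
```

Counting the roughly hundred lucky “positives” against the rest looks like evidence; pooling all the data, as a meta-analysis does, reveals the effect to be vanishingly small.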

No, if a homeopath fails to reproduce the possibly anecdote-level (or possibly made-up)3,4 positive results which confirm their delusion, they don't start to revise their worldview but go looking for someone with a scientific qualification to copy something out of a textbook and pad it out with nonsense. Weingärtner's abstract and introduction claim that he will describe an “experimental situation for distinguishing a homeopathic potency and its solvent”, but this of course isn't the discredited5 Rao et al.,6,7 and there is no experimental content; he assumes that “every effort is made to exclude artifacts, systematic errors, and things like that” without actually explaining how one goes about doing so (that's what double blinding and randomization are for; in the presence of systematic errors the average of many measurements does not actually tend to the correct value). The paper apparently presents ideas from a previous paper in “more detail” and discusses “nonlocal effects” in “a more comprehensible way”, and having read it I can't imagine the low level of detail and incomprehensibility of the previous paper, which is in German in a journal called “Zeitschrift für Anomalistik” and is therefore, frankly, worthless.
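The point about systematic errors deserves emphasis, since it is exactly what the law of large numbers does not fix. A minimal sketch (the bias value is a hypothetical stand-in for, say, unblinded observer bias): no matter how many measurements you average, the mean converges to the true value plus the bias, not to the true value.

```python
import random

random.seed(7)

TRUE_VALUE = 0.0
NOISE = 1.0
BIAS = 0.3   # hypothetical systematic error, e.g. unblinded observer bias

def biased_measurement():
    """One measurement contaminated by a constant systematic error."""
    return TRUE_VALUE + BIAS + random.gauss(0.0, NOISE)

# The average converges, but to TRUE_VALUE + BIAS, not to TRUE_VALUE:
# random scatter cancels out, a systematic error does not.
for n in (100, 10000, 200000):
    mean = sum(biased_measurement() for _ in range(n)) / n
    print(f"n={n:6d}: average = {mean:+.4f}")
```

That is why double blinding and randomization matter: they are designed to remove the constant offset that averaging alone can never touch.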

It's not worth going through the “maths” in the paper, unless you consider it a great insight that an experimental result depends on both the experiment itself and on the external factors which influence it, so that you won't get the same result from the same experiment unless the external factors are the same. That's why experiments are repeated, so that the external factors average away.

This article has prompted two editorials, one from Alex Hankey and one from Lionel R. Milgrom,8,9 and if they can spin this out to three articles then so can I: Golden Balls.


The remarkable claims made in Nature (333, 816; 1988) by Dr. Jacques Benveniste and his associates are based chiefly on an extensive series of experiments which are statistically ill-controlled, from which no substantial effort has been made to exclude systematic error, including observer bias… The phenomenon described is not reproducible in the ordinary meaning of the word… Among other things, we were dismayed to learn that the salaries of two of Dr. Benveniste's coauthors of the published article are paid for under a contract between INSERM 200 and the French company Boiron… We conclude that the claims made by Davenas et al. are not to be believed.

John Maddox4

  1.  O. Weingärtner, J. Alt. Comp. Med. 15, 287 (2009).
  2.  A. Shang, K. Huwiler-Müntener, L. Nartey, P. Jüni, S. Dörig, et al., The Lancet 366, 726 (2005).
  3.  E. Davenas, F. Beauvais, J. Amara, M. Oberbaum, B. Robinzon, A. Miadonna, et al., Nature 333, 816 (1988).
  4.  J. Maddox, J. Randi, and W. W. Stewart, Nature 334, 287 (1988).
  5.  M. Kerr, J. Magrath, P. Wilson, and C. Hebbern, Homeopathy 97, 44 (2008).
  6.  M. L. Rao, R. Roy, I. R. Bell, and R. Hoover, Homeopathy 96, 175 (2007).
  7.  M. L. Rao, Homeopathy 97, 45 (2008).
  8.  A. Hankey, J. Alt. Comp. Med. 15, 203 (2009).
  9.  L. R. Milgrom, J. Alt. Comp. Med. 15, 205 (2009).

Tags: alex hankey, badscience, bpsdb, lionel milgrom