Does the placebo effect affect statistical evidence?

December 8, 2009

Our Information Age is afloat on lies, I asserted last week. It only takes a cursory glance at current news to rest my case. Look at the badge of a New York patrolman. It may be a fake, writes Ray Rivera (*). Tourists, and scientists who swear by statistics (1), be careful. The beefy wearer is probably genuine and not at all inclined to debate why losing a "dupe" is not as serious as losing the real thing, even though the public had better believe in both.

Good citizens may still be loath to let a con woman impersonating an officer pass on the strength of her dupe, a false positive. But the keener they are, the likelier they are to face a very irate, legitimate member of the police force, a rather daunting false negative.

Segue now to the evening of the last US state dinner. When dealing with a member of the general public, select Secret Service agents may have an even more atrophied sense of humor than lowly foot patrol officers. But with 320 overbearing, overpreening guests of President Obama pressing down on their checkpoints, wouldn't they feel the tables had been turned on them? If some of them shrank from inflicting a potentially career-damaging rebuke on someone with friends in the highest places of the land, notice that the odds at the White House were even worse than in New York (2).
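For concreteness, here is a minimal sketch of the arithmetic behind notes (1) and (2), assuming the challenger picks targets at random; the function name and the printout are mine, not from any source.

```python
# Toy arithmetic behind notes (1) and (2): if challenges are made at random,
# the chance that the person challenged is legitimate is simply one minus
# the proportion of fakes in the crowd.

def chance_of_rebuking_a_genuine_person(fake_fraction):
    """Probability that a randomly challenged person is legitimate."""
    return 1.0 - fake_fraction

# New York street, per note (1): fakes are roughly 1% to 10% of badges.
for fake_rate in (0.01, 0.10):
    print(f"NYC, {fake_rate:.0%} fakes: "
          f"{chance_of_rebuking_a_genuine_person(fake_rate):.1%} chance the badge is real")

# White House state dinner, per note (2): 2 gate-crashers among 320 guests.
fake_rate = 2 / 320
print(f"White House, {fake_rate:.2%} fakes: "
      f"{chance_of_rebuking_a_genuine_person(fake_rate):.1%} chance the guest is legitimate")
```

The keener the challenger, the more often these overwhelmingly legitimate targets get rebuked.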

Ginger Thompson and Janie Lorber tell us that "members of a Congressional panel" grilled the director of the Secret Service about how two fakes were let in (**). I fully agree with US Representative Peter T. King that the identification process would have been more accurate "if someone from the social secretary's office - which created the guest list - had been working alongside Secret Service agents". But why did nobody think of mentioning pattern recognition? Is it because its teachings are too inconvenient?

Identification errors of some kind are statistically unavoidable, but who cares about false negatives? I wish the late Senator Kennedy had been there to tell how he was prevented from boarding a plane because his name was, and probably still is, on the No Fly list. Wouldn't he have recommended my solution of letting it be known in advance how innocent victims are to be compensated? "Instructed to contact their supervisors" when in doubt, the White House agents might then not have been so shy about behaving with the customary courtesy of those in charge of a Boeing 767 boarding process.

We can hardly blame US Representatives for shunning science for theatrics when scientists themselves fall for staging tricks. We said last week that such errant advocates should be recalled as promptly as a car with faulty steering. As a matter of fact, John M. Broder reports that such a measure is under way after light was cast on some dubious data deals on climate change (***). The individuals concerned at the University of East Anglia and Pennsylvania State University have been asked, one to step down, the other to submit to a review.

My three stories may be true but, however sobering, they are about the past and inevitably focus on exposing the culprits. Our real concern ought to be for the future for which, of course, past experience can provide a useful guide. Take for example the case of Vioxx, a pain medication "taken off the market [...] when a study linked the drug to an increased risk of heart attack and strokes". Natasha Singer writes about a call for giving the FDA "more power - and more resources - to provide more detailed drug safety information" (****).

The rationale for more data is presented in one of Natasha Singer's sources by Joseph S. Ross and his colleagues (*****). Had the medical community been able to cumulatively access and process, without delay, the results from all trials involving rofecoxib, the product marketed as Vioxx, it would have anticipated the discovery of its untoward side effects "nearly 3 1/2 years before the manufacturer's voluntary market withdrawal".
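To give a concrete feel for what processing results without delay could look like, here is a minimal sketch of a cumulative pooled analysis; the trial counts below are invented purely for illustration and are in no way the rofecoxib data.

```python
# Sketch of a cumulative pooled analysis on invented trial counts: as each
# trial reports, add its counts to the running pool and check whether the
# pooled risk ratio excludes 1 (i.e. chance alone no longer explains the excess).
import math

# (year, events_drug, n_drug, events_control, n_control) -- hypothetical numbers
trials = [
    (1999,  5, 1000, 3, 1000),
    (2000,  9, 1500, 4, 1500),
    (2001, 14, 2000, 6, 2000),
    (2002, 20, 2500, 8, 2500),
]

a = b = c = d = 0  # cumulative events and arm sizes, drug vs control
for year, ed, nd, ec, nc in trials:
    a, b, c, d = a + ed, b + nd, c + ec, d + nc
    rr = (a / b) / (c / d)                    # pooled risk ratio so far
    se = math.sqrt(1/a - 1/b + 1/c - 1/d)     # standard error of log risk ratio
    lo = math.exp(math.log(rr) - 1.96 * se)   # lower 95% confidence bound
    hi = math.exp(math.log(rr) + 1.96 * se)
    flag = "  <-- signal of harm" if lo > 1 else ""
    print(f"{year}: RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f}){flag}")
```

In this toy run the signal appears years before the last trial reports, which is the whole point of pooling the data cumulatively rather than after the fact.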

"There are so many drugs on the market that it would probably be impossible [to] track all of them, said Dr. Elliott M. Antman, a professor of medicine at the Harvard Medical School". True enough but isn't there an even bigger obstacle? More data does not guarantee more information.

As we explained two weeks ago, statistical evidence is not the same thing as scientific proof. Its usefulness must be mediated by experts trusted by the decision makers as having authority. When the stakes are as high as they are in medicine, the issue is that conflicts naturally arise between competing, independent authorities, made all the worse by underlying conflicts of interest.

Take Joseph S. Ross' statistical study. Without accusing its authors of any impropriety, it is germane to the debate to recall that the same data, processed according to slightly different modeling hypotheses, will yield a spectrum of possible conclusions. That Merck, the maker of Vioxx, "questioned the methodology of the authors" was only to be expected. But as Laurence J. Hirsch revealed in an article about ghost writing (3), Joseph S. Ross and his co-authors are just as materially interested in the subject. One of them, David S. Egilman, "has testified [...] in more than 100 tort cases (nearly always for the plaintiffs) [...] and, by his own estimate, has earned $2 to $2.5 million for such testimony". (3-1)
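As a minimal sketch of that spectrum, consider two standard but different pooling conventions applied to the same invented trial counts; when trials randomize their arms in different proportions, naively pooling the raw counts can even reverse the per-trial picture.

```python
# Same invented trial counts, two modeling conventions: naive pooling of raw
# counts versus a fixed-effect, inverse-variance average of per-trial risk
# ratios. Each per-trial risk ratio below is exactly 1, yet the two
# conventions disagree.
import math

# (events_drug, n_drug, events_control, n_control) -- hypothetical trials
# with deliberately unbalanced arm sizes.
trials = [(10, 1000, 1, 100),   # 1% vs 1%  -> per-trial RR = 1
          (5, 100, 50, 1000)]   # 5% vs 5%  -> per-trial RR = 1

# Convention 1: pool the raw counts as if they came from one big trial.
a = sum(t[0] for t in trials); b = sum(t[1] for t in trials)
c = sum(t[2] for t in trials); d = sum(t[3] for t in trials)
print(f"Pooled raw counts:     RR = {(a/b)/(c/d):.2f}")

# Convention 2: inverse-variance weighted average of per-trial log risk ratios.
num = den = 0.0
for ed, nd, ec, nc in trials:
    log_rr = math.log((ed/nd) / (ec/nc))
    var = 1/ed - 1/nd + 1/ec - 1/nc      # variance of the log risk ratio
    num += log_rr / var
    den += 1 / var
print(f"Inverse-variance pool: RR = {math.exp(num/den):.2f}")
```

The first convention silently assumes every trial sampled its two arms in the same proportion, exactly the kind of modeling hypothesis over which equally competent experts can legitimately disagree.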

Were Joseph S. Ross' advice then implemented as is, it would simply fund a lucrative practice of analyzing data in hindsight to prove past negligence by pharmaceutical companies and increase the costs of bringing new drugs to market. To estimate the cost of such hindsight-driven justice, one has only to turn to the current system of medical malpractice in the US.

Should we therefore put our collective head in the proverbial sand? Better to recall that medicine has long known science is often trumped by belief when the ultimate authority is the patient. It may be humbling for science that a patient really gets better on a fake prescription, but life is full of such minor miracles. Yet science is not without recourse. Its response to the placebo effect is the double-blind drug trial (4).

Let us then collect meta data, but under three paramount conditions. First, keep such real data away from anyone who wants to draw conclusions from it. Second, each time someone proposes to test a hypothesis on it, ask experts in statistics, whose independence is guaranteed, to fabricate a second, equivalent dataset which neither invalidates nor confirms the hypothesis (5). Third, enroll scientists in a trial and randomly give them access to one of the two datasets, through an intermediary who holds a copy of both but is unaware of which dataset is genuine and which is a neutral fake.
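Here is a minimal sketch, in Python, of how these three conditions might be orchestrated. Building the neutral fake by shuffling the outcome column is only one possible reading of condition two, and every name below is my own assumption, not part of any existing procedure.

```python
# Hypothetical sketch of the three conditions: a curator keeps the real data,
# independent statisticians fabricate a neutral look-alike, and a blinded
# intermediary hands out copies labeled only "A" or "B".
import random

def fabricate_neutral_copy(records, outcome_key, rng):
    """Condition 2: keep every column as is but shuffle the outcome column,
    so the copy carries no signal either for or against any
    outcome-versus-exposure hypothesis."""
    outcomes = [r[outcome_key] for r in records]
    rng.shuffle(outcomes)
    return [dict(r, **{outcome_key: o}) for r, o in zip(records, outcomes)]

def seal_and_distribute(real, fake, scientists, rng):
    """Conditions 1 and 3: label the two copies at random, keep the key sealed
    with the independent statisticians, and give each enrolled scientist one
    copy chosen at random."""
    sealed_key = {"A": "real", "B": "fake"} if rng.random() < 0.5 else {"A": "fake", "B": "real"}
    copies = {label: (real if role == "real" else fake) for label, role in sealed_key.items()}
    handout = {name: rng.choice(["A", "B"]) for name in scientists}
    return handout, copies, sealed_key

# Toy records standing in for trial-level data (purely illustrative).
rng = random.Random(2009)
real_data = [{"id": i, "exposed": i % 2, "event": rng.random() < 0.05 + 0.05 * (i % 2)}
             for i in range(1000)]
fake_data = fabricate_neutral_copy(real_data, "event", rng)

handout, copies, sealed_key = seal_and_distribute(real_data, fake_data, ["Dr. X", "Dr. Y"], rng)
for scientist, label in handout.items():
    print(f"{scientist} analyzes {len(copies[label])} records labeled '{label}'")
# Only after all analyses are registered is the sealed key opened, to compare
# conclusions drawn from the genuine data with those drawn from the neutral fake.
```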

Will scientists' statistical evidence be found to be totally independent of their expectations unless they work for a pharmaceutical company? Or will the placebo effect be revealed for better and for worse to affect yet another human endeavor? Why be afraid of double-blind meta trials?

Philippe Coueignoux

  • (*) The Officer in Uniform Is Real; The Badge May Be an Impostor, by Ray Rivera (New York Times) - December 1, 2009
  • (**) 3 Secret Service Agents Put on Leave in White House Gate-Crashing, by Ginger Thompson and Janie Lorber (New York Times) - December 4, 2009
  • (***) Climatologist Leaves Post In Inquiry Over Leaks, by John M. Broder (New York Times) - December 2, 2009
  • (****) Public Database Is Urged To Monitor Drug Safety, by Natasha Singer (New York Times) - November 24, 2009
  • (*****) Pooled Analysis of Rofecoxib Placebo-Controlled Clinical Trial Data, by Joseph S. Ross, David Madigan, Kevin P. Hill, David S. Egilman, Yongfei Wang, Harlan Krumholz (Archives of Internal Medicine) - November 23, 2009
  • (1) according to information provided by Ray Rivera, fakes are somewhere between 1 and 10% of the total number of badges
  • (2) 2 fakes out of 320 make the probability less than 1%
  • (3) Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publication: The Tort Bar and Editorial Oversight of Medical Journals, by Laurence J. Hirsch (Mayo Clinic Proceedings) - September 2009
  • (3-1) added 12/08/09 at 18:00 EST: note these numbers have been revised according to an erratum by Laurence J. Hirsch
  • (4) see double-blind trials in Wikipedia
  • (5) the creation of ambiguous data is a useful tool of pattern recognition; see for example Character Recognition Based on Phenomenological Attributes, by Barry Blesser et alii (Visual Language) - 1973
Copyright © 2009 ePrio Inc. All rights reserved.