I find it bizarre that doctors and scientists have accepted the ISO/UKAS accreditation philosophy without any proof that it works.
They should know better.
There are three strands to clinical science.
1. Speculative science that generates new hypotheses.
2. Academic and clinical experimental science that explores and validates new hypotheses.
3. Routine diagnostic science that requires precision, accuracy, reproducibility and clinical interpretation.
Much of science is sound and has driven large technological changes with many positive consequences. However, all three strands are subject to subversion and uncertainty. Everything rests on whether the samples studied are representative of the totality of all possible samples.
Error is often accidental. Here’s a description of a good way to do it deliberately: don’t publish the results that don’t suit you.
Andrew J. Vickers, PhD
NEW YORK – According to a meta-analysis recently presented at the annual meeting of the Association for Clinician Researchers, failing to submit a paper for publication is of proven benefit for negative research results. Primary author Anthony Brown, an endocrinologist from the University of Connecticut, Storrs, reported the results of an extensive meta-analysis comparing a number of different treatment options. “Although some have claimed that a carefully argued discussion section, or a hard-hitting press release, can ameliorate the effects of negative results, the data are quite clear,” said Dr. Brown. He also stated: “In head-to-head comparisons, nothing beats not having a paper in the first place.” In an interview, Eileen Williams, senior author, concurred: “Researchers have long worried whether burying discomfiting data in a desk drawer was really effective. Our research should really end the debate; what other people don’t know can’t harm you.”
Participants expressed a sense of relief that common practice could finally be placed on a firm evidence-based footing. Eric James, Chair of Nuclear Medicine at Yale University, New Haven, Connecticut, said that, previously, all that researchers had to go on was anecdote: “I’ve previously told colleagues about personal experiences when our research came back with the wrong results and we didn’t publish. But it is so much more effective to cite hard data showing that putting pressure on the statistician isn’t half as effective as pretending you never did the study in the first place.” Dr. Irene Jenkins, a researcher who has mentored countless clinical researchers at Johns Hopkins University, Baltimore, Maryland, agrees. Dr. Jenkins stated: “I often have junior faculty come to me distressed over a nonsignificant P value. I generally advise 1 of 2 basic approaches: Either you can insist to co-authors that negative results simply aren’t publishable, or you can describe them as ‘difficult to interpret’ and say that you need time — lots and lots of time — to consider the data carefully in the light of existing research. Of course, this was just a judgment call on my part. Now I have clear evidence supporting what I have been doing for years.”
One aspect of Dr. Brown’s meta-analysis that generated particular interest was the finding that countries differed in their tendency to avoid publishing high P values. Several papers in the literature review clearly indicated that although US researchers sometimes published negative results, Chinese journals almost always reported statistically significant findings. “This is just one more way in which we are falling behind the Chinese,” said Dr. Brown.
Nonetheless, some investigators expressed reservations about the new findings. John Waldin, a doctoral candidate at McGill University, Montreal, Quebec, Canada, said that he and coworkers had not found failure to submit statistically superior to alternative strategies for dealing with negative results. According to Waldin, techniques such as selective data reporting, unplanned subgroup analyses, and data dredging can be equally effective — especially if done carefully. That said, Waldin did concede that his doctoral advisor had told him that his findings were probably not worth publishing, given that his P values were all > .05.
Meanwhile, Dr. Brown is all set to submit his work for publication, and is confident that he has a good chance of being accepted by a high-impact journal: “Many of the best journals agree with us that the American public needs to be protected from high P values. Unless the research is about something we don’t like, of course.”
With apologies to The Onion
Here are lots more ideas. Remember John Ioannidis’s warning: “claimed research findings may often be simply accurate measures of the prevailing bias.”
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
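The corollaries above come from a simple probability model: the chance that a published "significant" finding is true depends on the pre-study odds that the hypothesis was true, the study's power, the false-positive threshold, and the amount of bias in reporting. The sketch below (my own illustration, not from the text; the function name, parameter names, and the default values of 80% power and a .05 significance threshold are all assumptions chosen for the example) shows why speculative, biased, or crowded fields fill the literature with false positives:

```python
def ppv(prior, power=0.8, alpha=0.05, bias=0.0):
    """Probability that a published 'significant' finding is actually true.

    prior : pre-study probability that a tested hypothesis is true
            (low in speculative or 'hot' fields -- Corollaries 3 and 6)
    power : chance a real effect yields p < alpha (low for small
            effect sizes -- Corollary 2)
    alpha : false-positive rate when there is no real effect
    bias  : fraction of would-be negative results reported as positive
            anyway (flexible analysis, vested interests -- Corollaries
            4 and 5)
    """
    true_pos = prior * (power + bias * (1.0 - power))
    false_pos = (1.0 - prior) * (alpha + bias * (1.0 - alpha))
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # As the pre-study odds fall, so does the value of a 'positive' paper,
    # even with completely honest reporting (bias = 0).
    for prior in (0.5, 0.1, 0.01):
        print(f"prior={prior:<5} unbiased PPV={ppv(prior):.2f}  "
              f"with 20% bias PPV={ppv(prior, bias=0.2):.2f}")
```

With well-grounded hypotheses (prior 0.5) a significant result is true about 94% of the time; with long-shot hypotheses (prior 0.01) it is true less than 15% of the time, and any reporting bias drags the figure down further.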