How to Spot a Scientific Dud

Even with science reports, it’s worth remembering the adage: If it seems too good to be true, it probably is.

“Nuclear Fusion in a Test Tube” announced the Financial Times on 23 March 1989. Seven years later, CNN ran a story titled “Ancient meteorite may point to life on Mars”, complete with sub-heading “Biggest discovery in the history of science”. In each case, the reports were part of a flurry of worldwide media attention, of the kind normally reserved for royal births and movie star shenanigans rather than scientists’ announcements.

Alas, despite the hoopla, neither report proved to be true. Claims of deuterium (heavy hydrogen) atoms fusing together in a tabletop experiment have been widely discredited. Likewise, the scientific community is highly sceptical that tiny structures in a meteorite from Mars are fossilised microorganisms.

These cases show that even with science reports, it’s worth remembering the adage: If it seems too good to be true, it probably is.

I came across another spectacular-if-true situation some years ago, when magazine editors suggested I might write about a Chinese report of DNA being recovered from a dinosaur fossil. Especially in the wake of the movie Jurassic Park, this could make for a huge story. “That’s impossible!” I announced – or words to that effect, based on some knowledge of fossilisation and organic molecules. It was indeed another case of researchers being fanciful.

Yet issues with published science are rarely so clear cut. At one end of the spectrum are subtle issues that only other scientists may spot – as may be the case with experiments by Johann Gregor Mendel, the 19th-century monk now known as the “Father of Genetics”. Mendel grew peas, noting how characteristics depended on the parent plants. He had a firm grasp of the principles involved, and this may have guided his results, which statisticians have since argued show too little random scatter, lying suspiciously close to the expected ratios.

Mendel’s notebooks were burned, but the notes of Robert Millikan have been scrutinised over a similar controversy. Millikan performed an “oil drop experiment” to determine the charge of the electron, and won the 1923 Nobel Prize for Physics. He has been accused of performing “cosmetic surgery” on his data, rejecting observations so his overall result would appear more accurate.

But at least Mendel and Millikan aimed for valid science. Occasionally, there are cases of downright fraud. One of the most famous was the Piltdown Man skull, which was made public in the UK in 1912 and hailed as a “missing link” between apes and humans. There was scepticism, and in 1953 it was proved to be a forgery, created from a human cranium and an orang-utan lower jaw that were a few hundred years old, along with fossil chimpanzee teeth.

No one knows just who created Piltdown Man. But the figure behind a recent famous fraud was a South Korean researcher hailed as a national hero: Dr Hwang Woo-suk. In 2005, he led a team that claimed to have derived stem cell lines from cloned human embryos, created with cells from 11 people. This came soon after Hwang claimed to have achieved the first cloning of a human embryo. After a colleague said the research was faked, a panel investigated and announced, “This is a serious wrongdoing that has damaged the foundation of science.”

Though the full mix of reasons for the scandal was unclear – enthusiastic government support played a role – one was surely the fact that the scientific world prizes novel results. This in turn means there is rarely any reward for verifying someone else’s work.

Which is a pity, as inaccurate results slip through even supposedly rigorous peer-review processes. In an article noting that two pharmaceutical companies had tried reproducing 110 studies, and achieved the same results in fewer than a fifth of them, The Economist bleakly commented, “There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.”

I’m not so pessimistic about the overall situation, though this may be due to a background in “hard science”. While researching for my physical chemistry PhD, I did some experiments involving supposedly known reaction rates, prior to heading for the frontiers of science (well, novel research anyway). For one of these, between hydroxyl radicals and bromine atoms, I came up with a value perhaps 10 percent different from the accepted rate. My supervisor checked my numbers, I reran the experiment, and we published the revised figure. Science advanced, as it should.

Not that many people noticed, since the result was far from earth shattering. If science is to make the headlines, results must be unusual, which in turn makes them unlikely. So if you hope to spot dud science in the media, bear in mind cautionary tales like cold fusion, and carefully check for the extraordinary evidence required to substantiate extraordinary claims.
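One way to see why extraordinary claims need extraordinary evidence is a rough Bayesian calculation. The sketch below is purely illustrative – the probabilities are invented, not drawn from any real case – but it shows that if a claim starts out highly improbable, even a seemingly reliable positive result should leave you doubtful.

```python
# Illustrative Bayes calculation (all numbers invented): how much should
# one positive study shift belief in an a-priori unlikely claim?
prior = 0.001            # assumed prior probability the claim is true
p_pos_if_true = 0.95     # chance of a positive result if the claim is true
p_pos_if_false = 0.05    # chance of a false positive if the claim is false

posterior = (p_pos_if_true * prior) / (
    p_pos_if_true * prior + p_pos_if_false * (1 - prior)
)
print(f"Probability the claim is true after one positive study: {posterior:.3f}")
# Roughly 0.019 – still under 2 percent. A single striking result barely
# dents the implausibility; replication and independent checks are needed.
```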

Other signs to look for include where the research was published: in a reputable journal such as Nature, in a journal with low barriers to entry, or even in something like What Doctors Don’t Tell You, which targets fans of quackery. Ask: does the experimental data really support the conclusions drawn? Also, importantly: where did the research funding come from?

Though science should be objective, results may be skewed to favour funding organisations. Examples include research sponsored by the tobacco industry. In 2007, the University of California announced that researchers had “documented for the first time how the industry funded and used scientific studies to undermine evidence linking secondhand smoke to cardiovascular disease.”

Here in Hong Kong, I believe the “Who pays the piper calls the tune” principle applies to environmental impact assessments, which are paid for by would-be developers. Indeed, when conducting a bird survey for an EIA, I told the proponent that he owned an area of outstanding biodiversity, and afterwards was wryly amused when birds were excluded from the next round of surveys.

Another point that may seem obvious is whether the experiment had a large enough sample size. I had long believed that drinking coffee or tea is dehydrating, as it seemed to be established common knowledge. Yet lately I was surprised to find this was based mainly on a 1920s study involving just three men, while more recent assessments suggest that moderate amounts of coffee or tea hydrate you just as much as water.
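As a rough illustration of why three subjects tell you so little, here is a small simulation – hypothetical numbers, nothing to do with the 1920s study. The typical error of a sample mean shrinks only as one over the square root of the sample size, so tiny samples give wildly unreliable answers.

```python
# Hypothetical simulation: how unreliable is an average taken from just
# three people, compared with larger samples? All values are invented.
import random

random.seed(1)
TRUE_MEAN = 0.0  # pretend the true effect (e.g. hydration difference) is zero

def trial_mean(n):
    """Mean of n noisy individual measurements (Gaussian noise, sd = 1)."""
    return sum(random.gauss(TRUE_MEAN, 1) for _ in range(n)) / n

for n in (3, 30, 300):
    means = [trial_mean(n) for _ in range(10_000)]
    rms_error = (sum(m * m for m in means) / len(means)) ** 0.5
    print(f"n={n:>3}: typical error of the sample mean ~ {rms_error:.2f}")
# The error falls roughly as 1/sqrt(n): about 0.58 for three subjects,
# 0.18 for thirty – a three-person study can easily "find" a spurious effect.
```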

Phew. I’ll drink – tea – to that!
