Journal of Creation 25(2):10–11, August 2011
How scientific is our science?
According to Jonah Lehrer,1 there is a serious problem with much of our scientific research: the ‘decline effect’. Increasingly, many ‘well-established facts’ are coming into question. Despite the original data sets indicating the validity of certain findings, even ‘beyond reasonable doubt’, subsequent attempts to replicate these results are failing. New antipsychotic drugs, originally hailed as offering dramatic improvements in patients’ symptoms, seem to have lost their power; other therapies, ranging from cardiac stents to vitamin E treatments, appear to be waning in their effectiveness; the efficacy of some antidepressants appears to have declined threefold in recent decades. “It’s as if our facts were losing their truth”, he argues, and “claims that have been enshrined in textbooks are suddenly unprovable… it’s occurring across a wide range of fields, from psychology to ecology.” Lehrer is clearly not alone in expressing such concerns. John Ioannidis, Professor of Medicine at Stanford University, has published a paper entitled ‘Why most published research findings are false’.2
Lehrer is probably right in saying that, in most cases, the problem is not deliberate fraud. Sometimes, the most plausible explanation is that the original sample size was just too small and, when the experiment is repeated and the effects of randomness are mitigated, a truer result is obtained. In other cases the problem is the design of the experiment. The apparent decline in the effectiveness of antipsychotics, for example, could be attributed to the choice of subjects: those suffering milder forms of psychosis might be less likely to demonstrate dramatic improvement than the more serious cases. Sometimes, so little is known about the subject being investigated that the factors which determine the result are not even known. Consequently, the data yielded cannot be related to the parameters being tested, as they are really a by-product of ‘invisible’ variables which are not understood. What was of particular interest in Lehrer’s article, however, was its emphasis on the ‘human aspects’ of the problem. According to Lehrer, these include selective reporting of results, publication bias by journal editors, and fashions and illusions nurtured by a priori beliefs.
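The statistical mechanism behind this is easy to demonstrate. The following sketch (an illustration of the general point about small samples and publication bias, not anything taken from Lehrer’s article; all names and numbers are chosen for illustration) simulates many small, underpowered trials of a treatment with a modest true effect, ‘publishes’ only the impressive-looking results, and then replicates each published finding with a larger sample. The published effects are systematically exaggerated, and the replications ‘decline’ back towards the true value.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # modest real effect (in standard-deviation units)
SMALL_N = 20        # sample size of the original, underpowered trials
LARGE_N = 500       # sample size of the replication studies
THRESHOLD = 0.45    # rough 'significance' cut-off an editor might demand

def run_trial(n, effect):
    """Mean observed effect in a trial of n subjects (normal noise, sd = 1)."""
    return statistics.mean(random.gauss(effect, 1.0) for _ in range(n))

# Run many small trials, but 'publish' only the impressive ones.
published = [e for e in (run_trial(SMALL_N, TRUE_EFFECT) for _ in range(2000))
             if e > THRESHOLD]

# Replicate each published finding with a larger sample.
replications = [run_trial(LARGE_N, TRUE_EFFECT) for _ in published]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

The published average necessarily exceeds the cut-off, while the large replications cluster around the true effect, so the ‘effect’ appears to wear off even though nothing about the treatment has changed.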
Darwin’s theory of sexual selection
In the early 1990s, the Danish ornithologist Anders Møller published a number of papers providing data supporting Darwin’s theory of sexual selection.3,4 Female barn swallows, he claimed, preferentially mated with males having long, symmetrical feathers. Since there appeared to be a correlation between the genetic quality of the bird and the symmetry and length of its feathers, he argued that this confirmed the view that feather ornaments in birds arose through an evolutionary process. Aesthetics was really about genetics.
Lehrer reports that, following the publication of Møller’s work, a number of scientists published data supporting his findings. Not only did some confirm the role of symmetry in sexual selection in barn swallows, but others demonstrated the principle with fruit flies and even humans. Researchers found that women preferred the smell of more symmetrical men. Anthropologists at Rutgers found that more symmetrical men were consistently rated as better dancers. Within a few years, however, all the hype died down, as more studies were conducted which cast doubt on Møller’s conclusions. Some even became very critical of him.5 Why, then, did Møller’s idea become so popular so quickly?
Publication bias and selective reporting
Evolutionary biologist Professor Leigh Simmons of the University of Western Australia was one of many who became enthusiastic about the symmetry theory and started to conduct his own experiments. However, he failed to find the effect. Remarkably, when he submitted his results to the scientific journals, he had difficulty getting them published. “[They] only wanted confirming data”, he said. “It was too exciting an idea to disprove.”1 Lehrer comments, “For Simmons, the steep rise and slow fall [of the symmetry theory] is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research; after a new paradigm is proposed, the peer review process is tilted towards positive results.”1
Professor Michael Jennions, a biologist at the Australian National University, noted the ‘decline effect’ in papers dealing with ecology and evolutionary biology—many of the theories seemed to fade into irrelevance. Jennions postulated a number of reasons for this, including publication bias against non-significant results, and paradigms generating ‘bandwagon effects’.6 According to Professor Richard Palmer, a biologist at the University of Alberta, a good deal of the ‘decline effect’ can be explained by the selective reporting of results.7 He commented, “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a priori beliefs often repeated.” Even the act of measurement is vulnerable to all sorts of perception biases, he argued. Indeed, he remarked, “That’s just the way human beings work.”1
A good example of selective reporting relates to studies of acupuncture. While in some Asian countries acupuncture is widely accepted as a medical treatment, in the West it is viewed with more scepticism. These cultural differences appear to influence the results of clinical trials. In forty-seven studies conducted in China, Taiwan and Japan, every single trial concluded that acupuncture is an effective treatment. However, in ninety-four trials conducted during the same period in the United States, Sweden and the UK, only fifty-six per cent found a benefit. Palmer notes, “this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.”1
Professor Ioannidis would agree. “It feels good to validate a hypothesis,” he says, and “it feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven, you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”1
Science and the paradigm
Lehrer’s article confirms what biblical creationists have been saying for many years. Much of what passes as science is far from objective and unbiased. Ruling paradigms will often dictate the interpretation of data, which might otherwise be understood as indicating something quite different. Common anatomy obviously points to a common ancestor if you are an evolutionist, but to a common designer if you are a creationist. Comparison of protein sequences provides either evidence for evolution or evidence against evolution, depending on how the data is presented.8 People will believe what they want to believe, and data will be viewed accordingly.
One study reported in the Journal of the American Medical Association showed that, of thirty-four tests that had been subjected to replication, the results of forty-one per cent of them had either been directly contradicted or had been shown to be significantly exaggerated.9 If this is true for controlled experiments, where tests can be carefully replicated by others, how much more will it be true of evolutionary speculations about events in the distant past, outside of living memory, for which there is no possibility of replication? And if this is true of matters relating to scientists’ day-to-day work, how much more will it be true of issues that have profound implications for what people believe about themselves and how they may behave? Many people do not want to believe in a creator as they dislike the idea that they will be held accountable for their actions. Believing the theory of evolution, rather than the biblical account of creation, enables them to justify their desire to live as they please.
For many, it is a fact that there is no creator and there was no supernatural creation. If so, then life must have arisen through natural processes. When they learn that no scientist can explain how the first life forms could have evolved from inanimate matter, they simply assume that this question will be answered one day through further research. When they learn that the fossil record is characterised by the general absence of clear transitional forms (rather than the presence of countless transitional forms forming many obvious chains, which would be expected if evolution were true), they simply believe that for some reason the transitional forms were never fossilised. When they learn about the indescribable complexity of the genome and the cell, or listen to a great concert pianist, or see the paintings of a brilliant artist, they simply marvel at what evolution can do. The scientific basis for such beliefs can come later; after all, ‘it is a fact’ that evolution did it.
References
1. Lehrer, J., The truth wears off: is there something wrong with the scientific method?, The New Yorker, 13 December 2010; http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all.
2. Ioannidis, J.P.A., Why most published research findings are false, PLoS Medicine 2(8):e124, 30 August 2005; http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124.
3. Møller, A.P. and Höglund, J., Patterns of fluctuating asymmetry in avian feather ornaments: implications for models of sexual selection, Proceedings of the Royal Society B 245(1312):1–5, 22 July 1991; http://rspb.royalsocietypublishing.org/content/245/1312/1.abstract.
4. Møller, A.P., Female swallow preference for symmetrical male sexual ornaments, Nature 357:238–240, 21 May 1992; http://www.nature.com/nature/journal/v357/n6375/abs/357238a0.html.
5. Borrell, B., A fluctuating reality, The Scientist 21(1):26, 1 January 2007; http://www.the-scientist.com/article/home/39386.
6. Jennions, M.D. and Møller, A.P., Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution, Proceedings of the Royal Society B, published online 4 December 2001; http://www.anu.edu.au/BoZo/jennions/publications/Paper33.pdf.
7. Palmer, A.R., Detecting publication bias in meta-analysis: a case study of fluctuating asymmetry and sexual selection, American Naturalist 154(2):220–233, August 1999; http://www.jstor.org/pss/10.1086/303223.
8. Statham, D.R., Evolution: good science? Exposing the ideological nature of Darwin’s theory, ch. 7, Day One, UK, 2009.
9. Ioannidis, J.P.A., Contradicted and initially stronger effects in highly cited clinical research, JAMA 294(2):218–228, 2005; jama.ama-assn.org/cgi/reprint/294/2/218.pdf.