Mutations, and why you shouldn’t marry your cousin
Published: 12 August 2017 (GMT+10)
Some of the most interesting questions our ministry receives deal with the subject of genetics. It is an exciting, relatively new field of science, and many people want to know how it impacts our understanding of creation. Today we feature two genetics questions with responses from Dr Robert Carter, CMI-US.
Milla R. from Australia writes:
Hello. I read your interesting article about interracial marriage, where you mentioned that interracial couples’ kids are likely to have greater genetic variety than kids of couples from the same ethnic group. Does this mean that it’s wrong for two people of the same ethnic group to marry?
Thanks and God bless.
Dr Robert Carter responds:
An excellent question!
The answer is, “No.”
Here’s the explanation: Let’s say a recessive mutation exists in one out of every thousand people in a population (this is an extreme example; most mutations are not nearly that common). If two random people marry, the probability that a child is affected is the chance that both carry the mutation multiplied by the chance that both pass it on to the child:
1⁄1,000 × 1⁄1,000 × ¼
Thus, the probability of a child being born with that mutation = 0.00000025, or 0.25 children out of every million. That number is approximately zero.
We can calculate these probabilities for any degree of relatedness. For example, if an uncle marries his niece (or an aunt her nephew), there is a 25% probability (¼) that any mutation carried by one is also carried by the other. Thus, 1⁄1,000 × ¼ × ¼ = 0.0000625, or 6.25 out of every 100,000. The risk is 250 times greater than if two unrelated people marry. Of course, even with this extreme example, we are still only talking about 0.00625% of babies from uncle/niece marriages being affected.
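The arithmetic above can be checked with a few lines of Python. The 1-in-1,000 carrier frequency is the article’s assumed extreme example, not a measured value:

```python
# Probability that a child inherits two copies of a recessive mutation,
# following the article's worked example (carrier frequency 1 in 1,000).
carrier_freq = 1 / 1000  # assumed carrier frequency (the article's extreme example)

# Two unrelated people: both must carry the mutation (independent events),
# and both must pass it on (probability 1/4 for two carriers).
p_random = carrier_freq * carrier_freq * (1 / 4)

# Uncle/niece: if one carries the mutation, the other carries it with
# probability 1/4; both must still pass it on (another factor of 1/4).
p_uncle_niece = carrier_freq * (1 / 4) * (1 / 4)

print(f"Random couple:      {p_random:.8f} ({p_random * 1e6:.2f} per million)")
print(f"Uncle/niece couple: {p_uncle_niece:.7f} ({p_uncle_niece * 1e5:.2f} per 100,000)")
print(f"Relative risk:      {p_uncle_niece / p_random:.0f}x")
```

Running this reproduces the article’s figures: 0.25 affected children per million for unrelated couples, 6.25 per 100,000 for uncle/niece couples, a 250-fold difference.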
For any particular mutation, the risk of an inbred child carrying it is relatively low. But there are many thousands of deleterious mutations circulating within every population. Thus, the true inbreeding risk is significantly greater than the risk for a single mutation.
If two people from very different racial backgrounds marry, the chance that both carry the same recessive mutation is much lower, so the probability of a child carrying two copies of a deleterious recessive mutation is extremely low. Yet, so was the probability from two random people in the same population marrying.
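To put a rough number on the cumulative effect of many mutations, here is a sketch under stated assumptions: suppose 1,000 independent recessive mutations each circulate at the article’s 1-in-1,000 carrier frequency (both numbers are illustrative, not measured values). The chance that at least one of them affects a child is:

```python
# Illustrative only: aggregate risk across many independent recessive
# mutations. The carrier frequency (1/1000) and the mutation count (1,000)
# are assumptions for this sketch, not measured population values.
carrier_freq = 1 / 1000
n_mutations = 1000

p_random = carrier_freq ** 2 * 0.25         # per-mutation risk, unrelated couple
p_uncle_niece = carrier_freq * 0.25 * 0.25  # per-mutation risk, uncle/niece

# P(at least one mutation affects the child) = 1 - P(none of them do),
# treating the mutations as independent.
risk_random = 1 - (1 - p_random) ** n_mutations
risk_related = 1 - (1 - p_uncle_niece) ** n_mutations

print(f"Unrelated couple, {n_mutations} mutations: {risk_random:.6f}")
print(f"Uncle/niece,      {n_mutations} mutations: {risk_related:.4f}")
```

Under these assumed numbers the per-child risk for the unrelated couple stays near 1 in 4,000, while the uncle/niece risk climbs to roughly 6%, which is why the aggregate risk matters far more than any single mutation.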
Take home point: just don’t marry your cousin.
Thanks for the enjoyable mathematical exercise. I hope it made sense to you.
Bill H. from the United States writes:
Dear Dr Sanford. My wife and I are reading your new book and we are enjoying it immensely. I am also debating a biologist who disagrees with your mutation rate calculations and cites the following article as his reference: ncbi.nlm.nih.gov/pmc/articles/PMC3276617/
I’m not a biologist but a retired engineer. Any help you can give me regarding this would be most appreciated.
Dr Robert Carter responds:
I am answering for Dr Sanford since he is not employed by CMI.
Ah yes, Peter Keightley.1 I follow his work and have learned a lot from him. Here, however, I think he is applying too much wishful thinking to his studies. First, the true mutation rate is still unknown. Older estimates were higher than some of the more recent ones. Some of the lower values come from studies performed on the Icelandic population. That country decided to jump feet-first into the world of genomics, so we have a LOT of data for Icelanders. The problem is that this is a small, inbred country, and it is not at all clear that their results can be generalized to the entire world. It is doubly unclear whether they can be generalized through all of human history. But there are other sources of low mutation rates he could be using.2

Personally, I have been very frustrated by the information being provided by sequencing programs. Take the 1000 Genomes Project. They sequenced more than 2,000 people, but the genomes are ‘low coverage’, in other words ‘high error rate’. One cannot use their data to measure the real mutation rate because the error rate approximates the expected mutation rate. This is something I very much want to know, and the fact that he assumes he has an accurate estimate is more than a little annoying. There is more ‘art’ than ‘substance’ here.
Second, he says the rate from these studies is lower than the one estimated from evolutionary assumptions. This is very convenient for him. Earlier data suggested that the real, measurable rate was much higher than the evolutionary rates. Also, they have pushed back the human–chimpanzee split from 3 million years ago (when I was in graduate school just over a decade ago) to more than 6 million years ago, and some are arguing for even more. This directly affects the evolutionary ‘rates’. So they cite the lower mutation rates while failing to mention the many caveats associated with the estimates. The real rate is an unresolved issue, and the fact that they want it as low as possible should make us suspicious.
Third, notice how he bases his estimates on “a method proposed by Kondrashov and Crow”, which we might construe as an ‘appeal to authority’. They, in turn, are making assumptions about the “mean selective constraint per site”. This is a hugely controversial issue. Sanford has taken great pains to measure the selection coefficient. I am not familiar enough with Kondrashov and Crow’s method to compare the two, but I am certain they are making assumptions in favor of evolution.
He then admits that his final, downward-skewed rate is 2.2 deleterious mutations per generation. This is 20 times higher than Haldane’s worst-case estimate of what would guarantee human extinction, and Haldane’s estimate is woefully low.3 To solve this, Keightley appeals to unknowns, specifically different models of selection (which have been studied with Mendel’s Accountant) or synergistic epistasis (which the link will show does not do what he thinks it does). I know this was not the focus of his paper, but he still has not conquered the ultimate bugaboo: the increase in beneficial mutations required by evolution.
Another paper was recently published that tries to get around these issues.4 This time, the author uses the assumption of evolutionary time to ‘prove’ that much of the genome is junk. He assumes that, since we have been around as a species for a very long time, and since we are not extinct, it must be true that most new mutations strike in the ‘junk’ portion of the genome. This is a clear case of circular reasoning. Since human females cannot have enough babies to provide enough fodder for natural selection to remove all the bad mutations from our population, the only other solution to his mind is that the genome cannot be as complex as modern science has demonstrated. He cannot accept that the human species might not be as old as he believes.
Interestingly, he takes time to slam creationists in this secular science article, but does not tell the truth on two counts. First, the ‘creationist’ he cites is Francis Collins, who accepts all tenets of neo-Darwinian evolutionary theory, including deep time and a common ancestry of humans and chimps. Second, he says that “creationists such as Francis Collins” believe the genome is 100% functional. Yet in the source he cites, Collins is only saying that the term ‘junk DNA’ is no longer used (a complete turnabout for Collins, by the way, who once used it often as a major argument against design). Also, ‘creationists’ do not believe the human genome is 100% functional. There could be all sorts of non-coding spacer material in there, for example, but it depends on how one defines ‘information’ and ‘functionality’. However, once the amount of functional DNA gets above a certain small threshold (maybe ≥ 5%), it is the evolutionists who will be in trouble, because they can no longer explain away enough of the mutations; too many would be occurring in the functional portions. In fact, that threshold has already been surpassed, which is why we are seeing so much resistance from them to accepting the facts.
In summary, while Keightley is a well-known scientist and does decent work in many areas, in this case he is clearly attempting to wish away the problems he knows exist. He knows the mutation rate is too high and appeals to unknowns to solve the problem.
What do you tell your friend?
- The estimate is based on many, many assumptions.
- The estimate is still too high.
- Appealing to unknown processes is wishful thinking. Worse, those unknown processes are not actually unknown: they have been studied and quantified, and they fail to solve the problem. Worst of all, the real problem of how to make people better over time is not even being discussed here.
Dr John Sanford responded further:
The Keightley paper says that the point mutation rate (designated ‘u’) is 70 point mutations per person per generation. This is in the ballpark of what I have been saying all along. But he also calculates the rate of deleterious (non-neutral) mutations (designated ‘U’). He comes up with 2.2 deleterious mutations per person per generation. This is simply because he is assuming most mutations are neutral; in other words, he believes most mutations arise in the ‘junk DNA’ portion of our genome. The recent ENCODE findings show that most of the genome is functional, but Dr Keightley is in denial of this new understanding, and so assumes only a few percent of the genome is functional.
For various reasons, I think 70 is still on the low end. There are numerous papers now saying 100, including Lynch’s newest paper.5 But even if the mutation rate was 2.2, my arguments hold.
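The arithmetic behind the 70-mutations-per-generation figure can be checked directly; the per-base rate and genome size are those cited in footnote 2:

```python
# Per-generation point-mutation count implied by the cited per-base rate
# (values taken from the article's footnote 2).
rate_per_bp = 1.1e-8  # mutations per base pair per generation
genome_size = 3e9     # haploid human genome, ~3 billion base pairs
copies = 2            # diploid: each cell carries two copies of the genome

u = rate_per_bp * genome_size * copies  # = 66, which the footnote rounds to ~70
print(f"u ≈ {u:.0f} point mutations per person per generation")
```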
References and notes
- Keightley, P.D., Rates and fitness consequences of new mutations in humans, Genetics 190:295–304, 2012. Return to text.
- He cites a value of 1.1×10⁻⁸ mutations/bp/generation. Multiplying by the approximately 3 billion nucleotides in the genome, then accounting for the fact that each cell has 2 copies of the genome, yields 1.1×10⁻⁸ × 3×10⁹ × 2 ≈ 70. Return to text.
- Rupe, C.L., and Sanford, J.C., Using numerical simulation to better understand fixation rates, and establishment of a new principle: Haldane’s Ratchet, Proceedings of the Seventh International Conference on Creationism. Pittsburgh, PA: Creation Science Fellowship, 2013. Return to text.
- Graur, D., An upper limit on the functional fraction of the human genome, Genome Biol Evol evx121, 2017; doi.org/10.1093/gbe/evx121. Return to text.
- Lynch, M., Mutation and human exceptionalism: our future genetic load, Genetics 202:869–875, 2016. Return to text.
I really need to get familiar with these terms. The term _synergistic epistasis_ is very important to me regarding the issue of so-called beneficial mutations. I'm going to study biochemistry and molecular genetics, and these issues will be a great resource to me.
Regarding mutations and the estimates, it struck me that even with 2,000 people, low coverage can still give high error rates. It's understandable how scientists like Keightley would want to rely on these data, but I also understand Carter's suspicions. For example, in analytical chemistry our professor gave us a sort of disclaimer: usually, the smaller the sample size, the greater the error rate, but there are also factors that can increase the error rate regardless of sample size, namely instrumentation error and the chemical compounds used in an analysis. For biochemistry it's worse because, even with a relatively large sample like 2,000 people, it is "low coverage"; in the study's context, it seems that low coverage would mean they did not cover enough ethnicities to make proper accounts of mutation estimates, so it suffers a sort of bias, resulting in high error rates. The other problem one must also solve is, you guessed it, money. The more efficient and accurate the instrumentation, the greater the price. This means the demand for accuracy and precision is twofold, and a single mistake will be fatal. This is why highly experimental sciences are difficult, in my opinion: they are very demanding of accuracy, especially for important studies such as mutation rates in a population. So Carter's suspicion is well justified, aside from the clear evolutionary assumptions.
Thank you for the positive comments. Clarification: 'low coverage' has to do with the amount of sampling per person (how many 'reads' per letter of the individual's genome) not the amount of sampling per population.
A 2014 ENCODE report, updating the 2007 pilot project report, said: "In agreement with prior findings of pervasive transcription [ref. 2007 pilot report], ENCODE maps of polyadenylated and total RNA cover in total more than 75% of the genome." www.pnas.org/cgi/doi/10.1073/pnas.1318948111
They also say that this could be an underestimate of the proportion of the human genome that is functional and give lots of reasons why they don't yet know (basically, they still don't know how it all works).
Strangely, they also say "Presently, ∼4,000 genes have been associated with human disease, a likely underestimate given that the majority of disease-associated mutations have yet to be mapped. There is overwhelming evidence that variants in the regulatory sequences associated with such genes can lead to disease-relevant phenotypes." Strange, because the Human Gene Mutation Database lists over 200,000 known mutations that cause heritable disease [[link deleted per feedback rules]].
It seems to me there are a lot of professionals out there who don't want to face up to the reality of deleterious mutations, genome decay, and the failure of neo-Darwinism as a way of understanding how life works.