
The four-dimensional human genome defies naturalistic explanations


Published: 6 October 2016 (GMT+10)
Figure 1: A comparison of the control of transcription in E. coli (left) with the Linux call graph (right). The bacterial cell is able to control many protein-coding genes (green lines at bottom) with relatively few controls (yellow and purple lines). Linux, while obviously a result of intelligent design, falls far short in that it requires many more high-level instructions to control relatively few outputs. From Yan et al. 2010.1

The human genome is the most complex computer operating system anywhere in the known universe. It controls a super-complex biochemistry that acts with single-molecule precision. It controls the interaction network of hundreds of thousands of proteins. It is a wonderful testament to the creative brilliance of God and an excellent example of the scientific bankruptcy of neo-Darwinian theory. Why? Because the more complex life is, the less tenable evolutionary theory becomes. Super-complex machines cannot be tinkered with haphazardly or they will break. And super-complex machines do not arise from random changes.

I am serious when I compare the genome to a computer operating system. The only problem with this analogy is that we have no computers that can compare to the genome in terms of complexity or efficiency. It is only on the most basic level that the analogy works, but that is what makes the comparison so powerful. After millions of hours of writing and debugging, we have only managed to create operating systems that can run a laptop or a server, and they crash a lot. The genome, though, runs a hyper-complex machine called the human body. The organization of the two is radically different as well. A team made up of computer scientists, biophysicists, and experts in bioinformatics (in other words, really smart people) compared the genome of the lowly E. coli bacterium to the Linux operating system (figure 1) and discovered that our man-made operating systems are much less efficient because they are much more “top heavy”.1 It turns out that the bacterial genome has a few high-level instructions that control a few middle-level processes, which in turn control a massive number of protein-coding genes. Linux is the opposite: it is much more top heavy and thus much less efficient at getting things done. The bacterium can do a lot more with fewer controls. I predict that the study of genomics will influence the future development of computers.
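The “top heavy” contrast can be made concrete with a toy sketch. The layer sizes below are invented for illustration only (they are not the measured values from Yan et al.); the point is the ratio of controllers to controlled outputs in a three-level hierarchy.

```python
# Toy illustration of "top-heavy" vs "bottom-heavy" control hierarchies.
# The layer sizes are made up for illustration; they are NOT the actual
# numbers measured by Yan et al. (2010).

def regulator_to_output_ratio(layers):
    """layers: dict mapping layer name -> node count.
    Returns regulators per output for a three-level hierarchy."""
    regulators = layers["top"] + layers["middle"]
    return regulators / layers["workhorse"]

# Genome-style hierarchy: few controllers, many controlled genes.
e_coli_like = {"top": 10, "middle": 50, "workhorse": 1000}

# Call-graph-style hierarchy: many callers, fewer shared low-level routines.
linux_like = {"top": 1000, "middle": 500, "workhorse": 100}

print(f"bottom-heavy (genome-like): {regulator_to_output_ratio(e_coli_like):.2f} regulators per output")
print(f"top-heavy (call-graph-like): {regulator_to_output_ratio(linux_like):.2f} regulators per output")
```

With these hypothetical numbers the genome-like hierarchy needs 0.06 regulators per output, while the call-graph-like one needs 15: the same qualitative contrast the study describes.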

Also, our computers use comparatively simple programs. Programmers talk about “lines of code”. We all learned in math class that a line is a one-dimensional object. So our computer programs are essentially one-dimensional. The human genome operates in four dimensions. This is one of the greatest testimonies to the creative brilliance of God available.

Figure 2: The beginning of the human Y chromosome as seen with the Skittle Genome Visualization Tool.2 In this view, we can see many repetitive DNA elements (the stripes). These repeats might not include “genes”, but they serve to hold the genes in a specific place in 3D space (the 3rd dimension will be discussed below). The large black area is a repeat that the human genome project skipped over (they did not yet have the technology to sequence highly repetitive DNA). There is a lot of information packed into the four letters used to spell out the first dimension of the genome, but this is not even the tip of the iceberg when it comes to the genome’s total information content.

The First Dimension: the DNA molecule

The human genome is about 1.8 metres long. All of it fits into the cell nucleus. To put that in perspective, if you were to make your DNA as thick as a human hair, you would then have more than 50 kilometres of DNA, crunched up into something about the size of a golf ball. Already, we must understand that God is an incredible engineer.

If we were to look at the sequence of letters in the DNA, it might look like this:


That is the first 700 letters of the human Y chromosome. Not very impressive, is it? But if we take that same sequence and replace the four letters with four coloured pixels, we get something that looks like figure 2.
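The letters-to-pixels idea is simple enough to sketch in a few lines. Here is a minimal version (the colour palette and the short sample sequence are my own placeholders, not Skittle’s actual palette or the real Y-chromosome data): each base maps to a colour, and the sequence is wrapped into fixed-width rows, so a tandem repeat lines up as the vertical stripes seen in figure 2.

```python
# Minimal sketch of a Skittle-style visualization: map each DNA base to
# a colour and wrap the sequence into fixed-width rows. When the row
# width matches a repeat's period, the repeat shows up as stripes.
# The palette and sample sequence are illustrative, not Skittle's own.

PALETTE = {"A": (0, 0, 0), "C": (255, 0, 0), "G": (0, 255, 0), "T": (0, 0, 255)}

def sequence_to_pixels(seq, width):
    """Return a list of rows; each row is a list of RGB tuples."""
    pixels = [PALETTE[base] for base in seq.upper()]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# A tandem repeat with period 5, drawn at width 5: every row is
# identical, producing vertical stripes.
repeat = "GATTC" * 4
for row in sequence_to_pixels(repeat, 5):
    print(row)
```

Feeding a real chromosome sequence through the same mapping, at an appropriate row width, is essentially what produced figure 2.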

The first dimension of the genome is simply the order of the letters. They spell out genes and those genes tell the cell to do things. This is not really that complicated, but things are about to change.

The Second Dimension: the interaction network

The second dimension of the genome deals with the way one section of DNA interacts with another section. As we have already seen, you can draw the first dimension out easily enough. But if you tried to draw out the second dimension you would first need to draw many arrows connecting different parts of the linear string of DNA. It would be impossible to draw the entire interaction network of the genome, so a small example will have to suffice. MicroRNAs (miRNAs) are very small molecules (about 22 nucleotides long) that are involved in the regulation of gene function. Figure 3 shows a portion of the miRNA regulation network as it acts on just 13 genes that are upregulated in association with atherosclerosis (hardening of the arteries). These genes are targeted by 262 miRNAs, creating 372 “regulatory relationships”. Not included in figure 3 are the 33 other genes that are downregulated by 295 miRNAs when the body is dealing with this condition. Remember, this is only a small slice of the 2nd dimension of the genome!
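A network like this is what a computer scientist would call a bipartite graph: every edge connects an miRNA to one of its target genes. Here is a minimal sketch of how the counts above relate to each other (the miRNA and gene names below are invented placeholders, not the actual atherosclerosis data):

```python
# Sketch of a bipartite regulatory network like the one in figure 3.
# The names below are invented placeholders; the real network links
# 262 miRNAs to 13 upregulated genes via 372 "regulatory relationships"
# (the edges of the graph).

# Each entry: miRNA -> set of target genes it regulates.
targets = {
    "miR-a": {"GENE1", "GENE2"},
    "miR-b": {"GENE2"},
    "miR-c": {"GENE1", "GENE2", "GENE3"},
}

n_mirnas = len(targets)
n_genes = len(set.union(*targets.values()))
n_relationships = sum(len(genes) for genes in targets.values())  # edge count

print(f"{n_mirnas} miRNAs target {n_genes} genes "
      f"via {n_relationships} regulatory relationships")
```

Note that the number of relationships can exceed both the number of miRNAs and the number of genes, because each miRNA may target several genes and each gene may be targeted by several miRNAs.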

Figure 3: A small portion of the microRNA regulatory network serves as an excellent example of the second dimension of the genome. Here, the orange areas represent 13 genes that are upregulated in association with atherosclerosis by 262 miRNAs (green dots with labels) that are, in turn, produced in other parts of the genome. (after Lin et al. 20143)

The second dimension deals with things like specificity factors, enhancers, repressors, activators, and transcription factors. Some of these (like enhancers) are control sequences within the DNA itself; others are proteins that are coded in one part of the genome but, once made, bind to another part and turn something on or off. But there are additional things happening in this dimension. During protein manufacturing, a gene is “read” by the cell in a process called transcription. Here, the DNA is copied into a molecule called RNA. The RNA is then translated into a protein. We have an excellent animation of this process on our multimedia site. But in a process called post-transcriptional regulation, the RNA can be inactivated or activated by other factors (like miRNAs) coded elsewhere in the genome.

The massive, multi-million-dollar ENCODE project revealed something about the genome that we are still trying to fully grasp. One of the greatest mysteries is how only about 22,000 genes can produce more than 300,000 distinct proteins. The answer is that the cell goes through a process called alternative splicing, where the genes are sliced and diced and different parts are used by different cells at different times and under different circumstances to produce the many different proteins. This incredibly complex process is just one part of that second dimension of the genome.
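The combinatorial power of splicing is easy to see with a little arithmetic: a gene whose transcript contains optional exons can, in principle, yield a different product for each subset kept. A minimal sketch (the exon names are hypothetical, and real splicing is tightly regulated; cells do not actually use every combination):

```python
# Illustration of why alternative splicing multiplies protein diversity.
# Exon names are hypothetical; real genes do not use every combination.
from itertools import combinations

constitutive = ["E1", "E4"]    # exons always included
optional = ["E2", "E3", "E5"]  # exons that may be kept or skipped

def splice_variants(constitutive, optional):
    """Enumerate transcripts: constitutive exons plus any subset of optional ones."""
    variants = []
    for k in range(len(optional) + 1):
        for subset in combinations(optional, k):
            variants.append(sorted(constitutive + list(subset)))
    return variants

variants = splice_variants(constitutive, optional)
print(len(variants), "possible transcripts from",
      len(constitutive) + len(optional), "exons")
```

With n optional exons there are 2**n possible transcripts, so just 3 optional exons already give 8 variants; a modest number of alternatively spliced exons per gene is enough, in principle, to turn 22,000 genes into hundreds of thousands of distinct products.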

The Third Dimension: 3-D DNA architecture

Figure 4. The 3D positioning of human chromosomes within the nucleus. Genes that are buried deep cannot be easily accessed, so the 3D folding of the chromosomes is incredibly important for overall genome function. (from Bolzer et al.5)

The third dimension deals with how the shape of the DNA molecule affects the expression and control of different genes. We have learned that sections of DNA that are buried deep within the coiled-up DNA cannot be activated easily.4 So genes that are used often are generally easily accessible. Thus, when God wrote out the information in the genome along that one-dimensional strand, He intentionally put things in a certain order so that they would be in the correct place when the DNA was folded into a 3-D shape.

One of the big revelations of the Human Genome Project was that genes that are used together do not necessarily appear near one another in the genome. This prompted claims like “It’s just junk” and “The genome is nothing more than millions of years of genetic accidents”. These did not last long, however, for once people started looking into how the genome is organized in the nucleus,5 they realized that, not only does each chromosome have a specified position in the nucleus, but genes that are used together are generally found next to each other in 3D space, even when they are found on different chromosomes!

The Fourth Dimension: changes to the first three dimensions

The fourth dimension of the genome deals with the way the first three dimensions change along the fourth dimension: time. Yes, you read that right. The shape (3rd dimension), the interaction network (2nd dimension), and the sequence of letters (1st dimension) all change. This so far outstrips even our most modern computers that the analogy isn’t even fair any more.

This fourth dimension can be illustrated in several ways. We know that different liver cells have different chromosome counts.6 This is because the liver needs lots of copies of certain genes that are involved in metabolism and detoxification. Instead of filling the genome with many copies of these genes, the liver just makes copies of them for its own use. We also know that different brain cells have different numbers and locations of various transposons.7 These are the “jumping genes” that are thought by evolutionists to be leftovers from ancient viral infections. The problem is, they are vital for the development of the human brain. Did you catch that? The genome dynamically reprograms itself. This is something that computer scientists have long struggled with. How can you make self-modifying code that does not run out of control? We also know that transposons are critical for controlling embryonic development in the mouse.8 So much for calling them “junk DNA”!
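The control problem just mentioned can be made concrete with a toy interpreter. In this sketch (entirely my own construction, not a model of any real genomic mechanism), a running program is allowed to overwrite its own instructions; the step limit and bounds check stand in for the safeguards any self-modifying system needs to keep from running out of control.

```python
# Toy self-modifying program: a list of instructions that can rewrite
# itself while running. The safeguards (step limit, bounds check) are
# what keep self-modification from running away.
# This is an illustration only, not a model of any genomic mechanism.

def run(program, max_steps=100):
    program = list(program)   # work on a private copy
    pc, acc, steps = 0, 0, 0  # program counter, accumulator, step count
    while 0 <= pc < len(program):
        if steps >= max_steps:
            raise RuntimeError("step limit exceeded (runaway self-modification)")
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "REWRITE":        # self-modification: replace an instruction
            target, new_instr = arg
            if not 0 <= target < len(program):
                raise IndexError("rewrite out of bounds")
            program[target] = new_instr
        elif op == "JUMP":
            pc, steps = arg, steps + 1
            continue
        elif op == "HALT":
            break
        pc += 1
        steps += 1
    return acc

# The REWRITE at index 1 changes instruction 2 from ADD 1 to ADD 100
# before it executes, so the result is 5 + 100, not 5 + 1.
prog = [("ADD", 5),
        ("REWRITE", (2, ("ADD", 100))),
        ("ADD", 1),
        ("HALT", None)]
print(run(prog))  # -> 105
```

Even in this tiny example, removing the step limit would let a single bad rewrite (say, one that creates an endless jump) lock up the machine, which hints at why robust self-modification is so hard to engineer and so remarkable in the genome.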


The genome is a multi-dimensional operating system for an ultra-complex biological computer, with built-in error correction and self-modifying code. There are multiple overlapping DNA codes, RNA codes, and structural codes. There are DNA genes and RNA genes. The genome was designed with a large amount of redundancy, on purpose, by a highly intelligent being who used sound engineering principles during its construction. Despite the redundancy, it displays an amazing degree of compactness, as a mere 22,000 or so protein-coding genes combinatorially create several hundred thousand distinct proteins.

I have a challenge for the evolutionist: Explain the origin of the genome! Charles Darwin wrote in the Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.

I know this quote has been abused (by both sides of the debate) but let’s think about this for a second. The simpler life is, the easier it is to explain in Darwinian terms. On the other hand, the more complex life becomes, the more intractable a problem it causes for evolutionary theory. We have just seen that the genome is the opposite of simple. This should make all Darwinists very uncomfortable.

I claim the genome could not have arisen through known naturalistic processes. The evolutionist who wants to take up this challenge must give us a workable scenario, including the source of informational changes, an account of the amount of mutation necessary, and a description of the selective forces necessary, all within the proper time frame. They will discover that evolution cannot do what they require, even over millions of years.

By the way, this was a brief summary of the information contained in the DVD The High Tech Cell. If you want more information, I suggest you purchase a copy from our webstore.

References and notes

  1. Yan, K.-K., et al., Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks. PNAS 107(20):9186-9191, 2010. Return to text.
  2. Seaman, J., and Sanford, J., Skittle: a 2-dimensional genome visualization tool. BMC Bioinformatics 10:452, 2009. Return to text.
  3. Lin, M., Zhao, W., and Weng, J., Dissecting the mechanism of carotid atherosclerosis from the perspective of regulation, International Journal of Molecular Medicine 34:1458-1466, 2014. Return to text.
  4. van Berkum, N.L., Hi-C: a method to study the three-dimensional architecture of genomes, Journal of Visualized Experiments 6(39):1869, 2010. Return to text.
  5. Bolzer, A., et al., Three-dimensional maps of all chromosomes in human male fibroblast nuclei and prometaphase rosettes, PLoS Biol 3(5):e157, 2005. Return to text.
  6. Duncan, A.W., et al., The ploidy conveyor of mature hepatocytes as a source of genetic variation, Nature 467:707-710, 2010. Return to text.
  7. Baillie, J.K., et al., Somatic retrotransposition alters the genetic landscape of the human brain, Nature 479:534-537, 2011. Return to text.
  8. Tomkins, J., Transposable elements key in embryo development, icr.org/article/6928, 25 July 2012. Return to text.

Readers’ comments

L. B.
Thank you for this article. The quality of these resources makes you proud of being a Christian and Creationist. I confirm that a line of code can handle multidimensional data; but the line in the file has indeed probably only one dimension. Some compare a line of code to a step/line in a mathematical demonstration: does the concept of steps have a dimension? Russell Humphreys might have something to say on this matter! You wrote “The genome dynamically reprograms itself. This is something that computer scientists have long struggled with. How can you make a self-modifying code that does not run out of control?” When I started writing software at the beginning of the eighties, computers were much slower and had much less memory (all kinds of memory) than now. Therefore you had to use specific programming skills; all of them seem present in the DNA. One of these skills was self-modifying code. Self-modifying programming was mastered by a limited number of programmers, because you need 1) the ability to think recursively 2) typically the ability to master machine codes 3) therefore to think high level and low level at the same time 4) to know that even a very slow computer executes your code much faster than your brain 5) be rigorous because as you wrote, 5.1) a tiny error of 1 bit 5.2) derails 5.3) easily 5.4) the whole computer and 5.5) fast; 6) others. But it allowed one to write bigger and faster software than “standard” programming. That’s why a massive self-modifying code like the DNA can only point to an amazingly skilled Designer! Glory to Him! The complexity of Self-Modifying Programming implies also many qualities of the DNA are yet to be discovered. I look forward to learning more on this topic!
Ray N.
Interesting article. As a control systems engineer, I deal with both SOFTWARE and HARDWARE. The software can be coded to control the hardware. We can put the hardware under the microscope but we can't see the software. But we know that software exists by watching the behaviour of the hardware. E.g. in a smartphone, the screen colours change according to the software controlling it. Evolutionists need to explain how the hardware accidentally came together, and is pre-installed with the proper OS to control it. Do you think an iPhone 7 (2016) would work if it came pre-installed with Android Cupcake (2009)?
I find this multidimensional complexity of genomes fascinating. The complexity described is staggering, and I very much enjoyed watching Dr Carter's creation conference talk about it online a few years back.

However as a computer programmer and Linux kernel developer, I find myself rather sceptical of the significance of the different balances when comparing genomes to computer software, especially how it allegedly relates to efficiency (which I notice reference 1 does not actually mention).

In C code, any function could be getting useful work done, regardless of whether it calls other functions. The call graph says nothing of the size of the functions or in which functions the most time / resources are actually being expended.

In fact whether a function calls others at the level the study analysed (compiled machine code) depends partly on to what extent the compiler/programmer inlines the functions called, itself a balance between performance (avoiding the overhead of function call/return and allowing more compiler optimisation to take place) and code size (avoiding increased instruction cache pressure due to duplication of machine code). Neither extreme is optimal.
Robert Carter
I am looking forward to people like you telling us more about how the genome works and coming up with better technological analogies to that end!
Peter H.
Excellent article. Very interesting. However, as a computer programmer (Oracle) I can't agree that computer programming is truly one-dimensional any more. In a few lines of code I can create and manipulate a multi-dimensional array of data. Having said that, I completely agree that any system man is able to create (even if multi-dimensional and multi-layered) comes nowhere near the amazing complexity and robustness of the design built into the DNA mechanisms described by Dr. Carter. Brilliant exposition.
Jon D.
Dr. Carter, fantastic work on this article! Absolutely mind-blowing material, yet I love the comparison to a computer OS, and the visualization of the genome curled up in the nucleus with different sections more exposed and accessible than others. Being more of a math/physics person (we met in Rochester MN last weekend!), I very much appreciated your explanations and figures, as they provided an amateur such as myself a firm grasp of the main ideas of such a technical subject. It sure makes our great God look pretty intelligent. I've been saying for years now that, with all we now know, in my opinion it requires more faith to believe in evolution than it does to believe in Intelligent Design. This material is a perfect illustration of that outlook. I wonder what an evolutionist's response would be, and how they might answer the questions to pose to them in your conclusion. And, I think I might need to get a copy of The High-Tech Cell!
Robert Carter
Possible responses include: incredulity, anger, claims that we simply 'do not understand' science, appeals to Gaia, appeals to 'emergent' complexity as a basic property of the fabric of the universe, and appeals to aliens, none of which are intellectually satisfying, which makes our opponents even more angry.
Brian W.
Fascinating article. Indeed, evolution is as impossible as falling UP an entire flight of stairs 100 stories high! You could fall down this hypothetical flight of stairs, but never UP them.
Dan M.
Hi Dr. Carter
We met a couple of weeks ago in coral springs (wheelchair guy).
Your field of study in my opinion is the slam-dunk for proof of a creator God. It brings to mind Rom 1:20-22 and how the evolutionists must constantly push the obvious knowledge of God from their minds. It must be like a nagging headache (no wonder they're so angry at us all the time)!
No one in their right mind would think the supercomputers of the day could assemble themselves, no matter how long allowed and as you said they are rudimentary by comparison to the genome!
I look forward to more articles about genetics as we discover more details of God's creative genius.
The human race is so proud yet we know so little! What a rebellious stiff necked bunch we are!

Psalm 139:14 "I will praise You, for I am fearfully and wonderfully made; Marvelous are Your works, And that my soul knows very well!"

I thank God for your ministry and pray for you often.
King T.
Thank you for making the complexity clear in a structured/layered manner. It helps one to formulate an articulate explanation of what is meant by complexity in the cell.
I am always fascinated by the animations that show how proteins are constructed and folded. The one thing that I've not been able to grasp so far is just how the various bits of ingredients get summoned to partake in that construction. How does the process "attract" the required ingredient at the right time, quantity and position for inclusion? I'd very much like to have an answer to that question.
Robert Carter
Many people would like to know the answer to that! If the cell depends on passive diffusion (a molecular 'random walk') it would take much too long to get anything done. Yet we don't know of any structure or series of chaperones that help to guide the millions of little parts constantly moving to and fro in the cell. Even so, cellular processes happen at lightning speed.

I suspect that the internal space of the cell is organized in such a way that each and every part has a specific channel in which to move and that these channels are efficiently organized and directly lead to the target destination. But this is a very speculative hypothesis.
Alex W.
Great article.
In Andreas Wagner's book 'Arrival of the fittest' he shows that complex networks can be tinkered with in 'astronomically large' numbers of ways, just one change per step, without destroying the functionality of the whole.
However, he also showed that simple networks cannot be changed at all because every link in the network is vital to the overall function. That rules out all possible Darwinian simple-to-complex origin arguments for biological control networks.
Phil M.
Excellent article. I have a question, which is a bit of a side-issue to the theme of the article itself. But I notice you make the statement “I claim the genome could not have arisen through known naturalistic processes”. Could you please define what you mean by the term “naturalistic processes”? I take it you are differentiating between “natural processes” (e.g. metamorphosis, photosynthesis, neutralisation, etc.,) and your use of the term “naturalistic processes”.
I am fully aware (as you state) that you do not believe "naturalistic processes" actually exist or that the genome (and indeed life itself) arose by chance or random means. So if by “naturalistic processes” you mean that, to occur, they would have to incorporate a form of magical randomness or chance that defies all credibility, then calling them "processes” is a misnomer. We know the occurrence of any real process (natural or otherwise) may be kick-started by a random event, but once triggered, natural processes do not incorporate random events to attain their end-product or final result. By definition, events in a process (any process) – both their sequence and their outputs – are pre-determined. So when you attach the term “processes” to the term "naturalistic” what are you unintentionally implying? That naturalistic processes, if actually known, would not incorporate a form of magical chance or randomness and would be pre-determined as far as sequence of events and outputs are concerned? If so, the concept of “naturalistic processes” becomes credible.
Robert Carter
By "naturalistic" I am referring to naturalism -- the belief that everything that has ever happened, is happening now, and will ever happen in the future, throughout the universe, can be explained by natural processes alone. What does a blind nature give us? Randomness. The challenge for the evolutionist is to explain life according to dumb luck.

You are correct that a "process" has a predetermined output and that nature has nothing like "predetermination".

But is it then a misnomer to combine "natural" and "process"? Not if it helps to illustrate the absurdity of their position. I could have put the word process in 'scare quotes' or emphasized it with italics, but this is unnecessary. It is enough to highlight the fact that the best idea they have to offer is radically insufficient to explain the data.
