I heard an interesting lecture today by Richard Bonneau about his work using machine learning to analyze genetic codes and also to understand interactions between the proteins they express. He said many fascinating things, and my head is still reeling from all of the exciting ideas and possibilities.
But one thing in particular struck me, from my perspective as somebody who programs computers. He talked about the rich web of interconnections among the components of our biology as it functions at the molecular and cellular level.
The redundancies built into the interactions between these various components create a very robust system. One effect of this is that our genetic code is remarkably resistant to damage.
In other words, the great majority of gene mutations turn out to be harmless. Thinking of the cybernetic equivalent, imagine that your computer program were modified by somebody flipping random bits in its binary image.
How many bits would need to flip before the program stopped working? In most cases, even one bit would suffice to crash the program.
But in the biological equivalent, you can flip lots of bits and the whole thing continues to run just fine. Which led me to start thinking about the following hypothetical problem:
Could you design a computer program that would be resistant to random mutations? In particular, could you design it so that many bits in its binary image could be randomly flipped, and the program would continue to run?
We can define a program’s “robustness” as the number of bits you would need to flip, on average, before the program breaks. So how do you make a program robust?
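To make that definition concrete, here is a rough sketch, my own illustration rather than anything from the lecture, of how you might estimate robustness empirically: repeatedly flip random bits in a copy of a program’s binary image and count how many flips it survives before a test run fails. The binary path, the test arguments, and the success criterion (exit code 0) are all assumptions made up for the example.

```python
import os
import random
import stat
import subprocess
import tempfile

def flips_until_failure(binary_path, test_args, max_flips=10_000):
    """Flip random bits in a copy of the binary, one at a time,
    until a test run no longer exits cleanly; return the count."""
    image = bytearray(open(binary_path, "rb").read())
    for flips in range(1, max_flips + 1):
        # Flip one randomly chosen bit somewhere in the image.
        i = random.randrange(len(image))
        image[i] ^= 1 << random.randrange(8)

        # Write the mutated image to a temp file and try to run it.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(image)
            mutated = f.name
        os.chmod(mutated, stat.S_IRWXU)
        try:
            ok = subprocess.run([mutated, *test_args],
                                timeout=5,
                                capture_output=True).returncode == 0
        except (subprocess.TimeoutExpired, OSError):
            ok = False
        finally:
            os.remove(mutated)

        if not ok:
            return flips
    return max_flips

def estimate_robustness(binary_path, test_args, trials=20):
    """Average number of bit flips survived across several trials."""
    return sum(flips_until_failure(binary_path, test_args)
               for _ in range(trials)) / trials
```

Run against an ordinary compiled program, this would give the kind of number the thought experiment is about.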
One problem is that it would be very easy to cheat. You could just make a million copies of the program and then, wherever one copy disagrees with the others, throw out the odd one in favor of the consensus.
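That cheat is essentially majority voting over redundant copies. Here is a minimal sketch of the idea, again my own illustration with a made-up stand-in for the program image: keep several copies and, at every bit position, take whichever value most of the copies agree on.

```python
import random

def majority_reconstruct(copies):
    """Rebuild the image bit by bit, taking the value that the
    majority of copies hold at each bit position."""
    result = bytearray(len(copies[0]))
    for i in range(len(result)):
        byte = 0
        for bit in range(8):
            ones = sum((copy[i] >> bit) & 1 for copy in copies)
            if ones * 2 > len(copies):  # more than half vote for 1
                byte |= 1 << bit
        result[i] = byte
    return bytes(result)

# Toy demonstration with a stand-in "program image".
original = b"a stand-in for a program's binary image"
copies = [bytearray(original) for _ in range(3)]
for copy in copies:
    for _ in range(5):                      # scatter a few bit flips
        i = random.randrange(len(copy))
        copy[i] ^= 1 << random.randrange(8)

# True unless two copies happen to be corrupted at the same bit.
print(majority_reconstruct(copies) == original)
```

The cost, of course, is that the program is many times larger than it needs to be.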
So let’s be more specific: Is there a way to minimize the size of a computer program for any given degree of desired robustness?