Poisson Nachman Crowell

Even aside from Basener and Sanford, others, including Nobel Prize winner Hermann Muller, pointed out that the human race cannot tolerate very many mutations per individual per generation. The number Muller arrived at was about 1 bad mutation per individual per generation as the limit the human genome can tolerate.

Additionally, so what if an individual has a good mutation if he has 10 bad ones to go with it? This is like having a slight increase in intelligence while carrying 10 heritable diseases. You go one step forward and ten steps back.

Can natural selection arrest the problem? Only if there are enough reproductive resources relative to the number of offspring per couple.

For human populations, Nachman and Crowell, and Eyre-Walker and Keightley, published work using a Poisson distribution as a reasonable model for the probability of a eugenically clean individual appearing in the face of various mutation rates.

If it is improbable that a eugenically clean kid can be produced by a couple, it becomes hard to weed out the bad. So this is an alternative way to arrive at Muller's conclusions, which are also Sanford and Basener's conclusions, and really everyone else's conclusions as summarized by Dan Graur: "If ENCODE is right, evolution is wrong."

This is a simpler argument than the one Basener and Sanford put forward, but to Sanford's credit, he also put the simpler version in his book Genetic Entropy. The following derivation, however, isn't in his book; it's something I ginned up myself. 🙂

So how can we estimate the probability a kid can be born with no defective mutations?

The following derivation was confirmed in Kimura's paper (see eqn. 1.4):

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1211299/pdf/1337.pdf

which Nachman and Crowell, and Eyre-Walker and Keightley reference as well.

So now the details:

let U = mutation rate (per individual per generation)
P(0,U) = probability of an individual having no new mutations under a mutation rate U (eugenically the best)
P(1,U) = probability of an individual having 1 new mutation under a mutation rate U
P(2,U) = probability of an individual having 2 new mutations under a mutation rate U
etc.

The Wikipedia definition of the Poisson distribution is:

f(k,\lambda) = e^{-\lambda}\frac{\lambda^k}{k!}

To conform the Wikipedia formula to the notation in the evolutionary literature, let

\lambda = U

and

f = P

Because P(0,U) = probability of an individual having no new mutations under a mutation rate U (eugenically the best), we can find the probability that the eugenically best individual emerges by letting:

k = 0

which yields

P(0,U) = \frac{U^0 e^{-U}}{0!} = e^{-U}
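Here's a quick numerical sanity check I put together (a minimal Python sketch of my own, not part of the published derivations), evaluating the general Poisson term and its k = 0 special case:

import math

def poisson_pmf(k, U):
    # P(k, U) = U^k * e^(-U) / k!  -- probability of exactly k new mutations
    return (U ** k) * math.exp(-U) / math.factorial(k)

U = 6.0
print(poisson_pmf(0, U))   # 0.00247875...
print(math.exp(-U))        # identical: e^(-6)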

Given that the Poisson distribution is a discrete probability distribution, the following normalization condition must hold:

\sum_{n}P_n = \sum_{i=0}^{\infty}P(i,U) = 1

thus

P(0,U) + \sum_{i=1}^{\infty}P(i,U) = 1

thus subtracting P(0,U) from both sides

P(0,U) + \sum_{i=1}^{\infty}P(i,U) - P(0,U) = 1 - P(0,U)

thus simplifying

\sum_{i=1}^{\infty}P(i,U) = 1 - P(0,U)

On inspection, the left-hand side of the above equation must be the fraction of offspring that have at least 1 new mutation. Noting again that P(0,U) = e^{-U}, the above equation reduces to the following:

\sum_{i=1}^{\infty}P(i,U) = 1 - P(0,U) = 1- e^{-U}
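Another illustrative sketch of mine (values assumed, Python again): summing the Poisson terms for i >= 1 numerically agrees with 1 - e^{-U}:

import math

def poisson_pmf(k, U):
    return (U ** k) * math.exp(-U) / math.factorial(k)

U = 6.0
# truncate the infinite sum; for U = 6 the terms beyond i = 60 are negligible
tail = sum(poisson_pmf(i, U) for i in range(1, 60))
print(tail)                # ~0.9975212
print(1 - math.exp(-U))    # same value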

which is in full agreement with Nachman and Crowell's equation in their very last paragraph, and in full agreement with an article in Nature, "High genomic deleterious mutation rates in hominids" by Eyre-Walker and Keightley, paragraph 2:

http://www.lifesci.sussex.ac.uk/CSE/members/aeyrewalker/pdfs/EWNature99.pdf

The simplicity and elegance of the final result is astonishing, and simplicity and elegance lend force to arguments.

So what does this mean? If the bad mutation rate is 6 per individual per generation (more conservative than Graur's estimate if ENCODE is right), then using that formula, the chance that a eugenically "ideal" offspring will emerge is:

P(0,6) = e^{-6} \approx 0.25\%

This would imply each couple needs to procreate the following number of kids on average just to get 1 eugenically fit kid:

\frac{1}{e^{-U}} = \frac{1}{e^{-6}} \approx 403.43

Or equivalently, for a couple to be replaced by 2 eugenically fit kids (the bare minimum for the population not to shrink), each couple needs to procreate the following number of kids on average:

2 \cdot \frac{1}{e^{-U}} = 2 \cdot \frac{1}{e^{-6}} \approx 807
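A small Python sketch of my own (mutation rates chosen only for illustration) tabulates these expected family sizes:

import math

for U in (1, 3, 6):                  # new bad mutations per individual per generation
    p_clean = math.exp(-U)           # probability an offspring carries no new mutations
    one_fit_kid = 1 / p_clean        # expected kids per couple to get 1 mutation-free kid
    two_fit_kids = 2 / p_clean       # expected kids per couple to replace both parents
    print(U, round(one_fit_kid, 2), round(two_fit_kids, 2))
# U = 6 gives roughly 403.43 and 806.86, i.e. the ~807 figure above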

For humanity to survive, even after each couple has 807 kids on average, we still have to make the further utterly unrealistic assumption that the eugenically “ideal” offspring are the only survivors of a selective process.

Hence, it is absurd to think humanity can purge the bad out of its populations — the bad just keeps getting worse.

In truth, since most mutations are of nearly neutral effect, most of the damaged offspring will reproduce, and the probability of a eugenically ideal line of offspring approaches zero over time.
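One way I'd make "approaches zero over time" concrete (my own illustration, not taken from the papers above): if each generation is independently mutation-free with probability e^{-U}, then the probability of a lineage staying mutation-free for g consecutive generations is

(e^{-U})^g = e^{-Ug}

which for U = 6 is already below one in a million by the third generation.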

Recall Muller's number of only 1 new bad mutation per individual per generation as the tolerable limit. So if anything I understated my case.

There are some “fixes” to the problem suggested by Crow and Kondrashov. I suggested my fix. But the bottom line is to look at what is actually happening to the human genome over time. Are we getting dumber and sicker? I think so. It’s sad.

We can test Basener and Sanford’s prediction by observing whether human heritable diseases continue to increase with each generation. Whether their derivation is right or not, some of their conclusions are observationally and experimentally testable.

On some level, I suppose even Basener and Sanford wished it were not so because it is a tragic conclusion.

One thought on "Poisson Nachman Crowell"

  1. Joe,

    Thanks for your response. First off, I agree that, from the standpoint of Fisher's actual theorem, it doesn't seem to be reflected explicitly in the development of neo-Darwinian/modern synthesis theory.

    I read some of the references by Edwards (whom you recommended). Edwards said in "R.A. Fisher's gene-centred view of evolution and the Fundamental Theorem of Natural Selection":

    As an aside we may note that, just as Fisher defined the technical meaning of the word variance in the 1918 paper, so the first recorded occurrence of the word covariance is in The Genetical Theory (1930b, p. 195). Surprisingly, the above theorem did not appear explicitly in the literature until Robertson (1966). A more general form making allowance for changes in the character values themselves was given by Price (1970, 1972a).

    It seems that only during the writing of The Genetical Theory did Fisher then realise that fitness itself could be considered as the character under selection, so that, since fitness is perfectly correlated with itself, the Fundamental Theorem emerges: the rate of change in fitness ascribable to gene-frequency change is equal to the genic variance.

    But I think a charitable reading of Basener and Sanford (2017) would permit the claim that Fisher was the first to link Darwinism and Mendelism, based on Fisher's 1918 paper:
    https://en.wikipedia.org/wiki/The_Correlation_between_Relatives_on_the_Supposition_of_Mendelian_Inheritance

    The not-so-fundamental Fundamental Theorem came later, in 1930. I scoured the earlier edition of your book Theoretical Evolutionary Genetics trying to find a formula stating Fisher's theorem, and that's how I noticed its complete absence from earlier editions, so I know from that it is definitely your view that Sewall Wright's formula was foundational, not Fisher's.

    It is understandable one might think Basener and Sanford (2017) claim Fisher's formula was foundational, but I can attest I apprised them in 2016 and thereafter of our discussion in December 2015, where you said Fisher's theorem was not so fundamental:
    http://theskepticalzone.com/wp/absolute-fitness-in-theoretical-evolutionary-genetics/#comment-99127

    So understandably I read the meaning of the paper differently than you would! But their choice of words and the meaning of what they are saying are worth clarifying. I leave that discussion between you, Michael Lynch, Bill, and John….

    But, backing up a bit, my understanding is that according to Queller 2017, there is a move (including Michael Lynch and Walsh) to create the hierarchy depicted below.

    Also, there are three formulas to consider. One is what I call the Bonkers Formula, which Graur apparently used to argue "If ENCODE is right, evolution is wrong." It applies to recombining diploid populations like humans:

    \sum_{i=1}^{\infty}P(i,U) = 1 - P(0,U) = 1- e^{-U}

    I derived it here:
    http://www.creationevolutionuniversity.com/science/?p=22

    The way I interpret the Bonkers Formula is that it is relatively independent of the mutation/selection balance formulas. It would take precedence over them, since hypothetically, from a medical standpoint, we can have harmful traits that have neutral to "beneficial" selection coefficients. Thus the ratio of "deleterious" to "beneficial" is moot if the absolute number of deleterious traits is high enough. It doesn't make sense to say a population is getting better if for every increase in IQ points and memory we add a kidney defect… thus there is the never-ending clash of "fitness" in the pop-gen sense vs. fitness in the medical sense. As far as Michael Lynch's studies on "compensatory mutations" go, they are of little comfort to those suffering heritable diseases, since having 13 babies like Octomom doesn't necessarily translate into more personal well-being….

    I call the above formula the Bonkers Formula because Graur claimed ENCODE was "bonkers" because of that formula (though I think he punched some numbers into his calculator wrong). It also seems to accord with what you said on pages 157-158 of your book about recombining diploid populations in relation to ENCODE:

    Clearly an organism with as much DNA as we have would be in severe trouble. Yet in humans well over 98% of all newborns survive to adulthood in most industrial countries.

    WHY WE AREN'T ALL DEAD. There are several possible resolutions of the dilemma. If much of the DNA is simply "spacer" DNA whose sequence is irrelevant, then there will be a far smaller mutational load.

    ….

    The mutational load calculation continues to be relevant to understanding whether most eukaryotic DNA has any function that is visible to natural selection. Recent announcements (Encode Project Consortium, 2012) that 80% of human DNA is "functional", based on finding some transcription or binding of transcription factors in it, are very misleading. Junk DNA is still junk DNA, however often its demise has been announced.

    So my reading of what you say is that, independent of Fisher's theorem, there is a point where enough bad mutations will lead to decline. My understanding (which could be wrong) is that mutational load may or may not be independently derivable from Fisher's theorem? Is that right?

    Then there are the mutation-selection balance formulas:
    https://en.wikipedia.org/wiki/Mutation%E2%80%93selection_balance
    haploid:
    q=\frac{\mu}{s}

    diploid:
    q \approx \sqrt{\frac{\mu}{s}}
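    For a concrete sense of scale (illustrative numbers of my own, say \mu = 10^{-6} and s = 10^{-2}), these give:

    haploid: q = \frac{\mu}{s} = \frac{10^{-6}}{10^{-2}} = 10^{-4}

    diploid: q \approx \sqrt{\frac{\mu}{s}} = \sqrt{10^{-4}} = 10^{-2}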

    These aren't derived from Fisher's formula, as far as I can tell. Aren't they for the infinite-population-size case, and don't they assume s is some fixed value rather than a mean? Do those formulas apply in the finite population case?

    Thanks in advance.

    [Figure: "fundy theorems"]
