Speed Kills, But Differential Speed Kills Better

Last week I reviewed “The Probability of God” by Stephen D. Unwin and mentioned that he isolated six parameters, which he used to compute a series of successively refined estimates of his probability, starting from the usual “I don’t know, and therefore the probability is 0.5.”

There is a problem with his approach, which he acknowledges. He starts with an a priori probability of 0.5 and computes the a posteriori probability after evaluating the effect of the first parameter. He then applies the same Bayesian calculation to that first result based on his evaluation of the second parameter, and so on, until he arrives at a probability of 0.67 that a personal God exists. Each step is treated as an independent piece of evidence being added to the mix. Therein lies the problem.
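The mechanics of that chaining are easy to show. Here is a minimal sketch in Python of Bayesian updating in odds form, using made-up likelihood ratios rather than Unwin’s actual figures. The catch is that multiplying one likelihood ratio after another silently assumes each piece of evidence is conditionally independent of the rest; if two pieces of evidence largely repeat each other, the chained posterior comes out too high.

```python
# A minimal sketch of chained Bayesian updating in odds form.
# The likelihood ratios (2.0) are invented for illustration; they are
# not Unwin's numbers.

def update(prob, likelihood_ratio):
    """One Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.5  # the "I don't know" starting point
for lr in (2.0, 2.0):   # two pieces of evidence, each treated as an
    p = update(p, lr)   # independent 2:1 vote for the hypothesis
print(round(p, 2))      # 0.8

# If the second piece of evidence is really just a restatement of the
# first, the honest combined likelihood ratio is 2.0, not 2.0 * 2.0,
# and the posterior should stop at:
print(round(update(0.5, 2.0), 2))  # 0.67
```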

Perhaps pursuing a less lofty goal will illustrate the problem better. Suppose we are insurance statisticians computing the probability of having an accident on a particular stretch of freeway. We select two parameters to correlate with observed accidents: the time of day and the average speed on the road. If you think about it, though, the average speed is well correlated with the time of day: speeds are higher from midnight to 4 AM than during either of the two rush hours. In other words, the two parameters are not independent. If we calculate the a posteriori probability of having an accident knowing the time of day and then re-calculate from that result using the average speed at that time, our result might have some meaning, but not in the way we expect. There are certainly ways to properly incorporate multiple parameters that are not independent (orthogonal), but blindly assuming that repeated computations will work is not correct.
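To put some (entirely invented) numbers on the double counting, the sketch below builds a small joint distribution over time of day, average speed, and accidents, constructed so that high average speed mostly occurs at night. Conditioning on both observations at once gives one answer; chaining two “independent” updates gives a noticeably higher one.

```python
# A made-up joint distribution over (time of day, average speed,
# accident), constructed so that high average speed mostly occurs at
# night. All probabilities are invented purely for illustration.

joint = {
    ("night", "high", True): 0.008, ("night", "high", False): 0.152,
    ("night", "low",  True): 0.001, ("night", "low",  False): 0.039,
    ("day",   "high", True): 0.006, ("day",   "high", False): 0.114,
    ("day",   "low",  True): 0.012, ("day",   "low",  False): 0.668,
}

def marginal(time=None, speed=None, accident=None):
    """Sum the joint probability over any unspecified variables."""
    return sum(p for (t, s, a), p in joint.items()
               if (time is None or t == time)
               and (speed is None or s == speed)
               and (accident is None or a == accident))

def update(prob, likelihood_ratio):
    """One Bayesian update in odds form."""
    odds = prob / (1.0 - prob) * likelihood_ratio
    return odds / (1.0 + odds)

# Correct: condition on both observations at once.
p_correct = marginal("night", "high", True) / marginal("night", "high")

# Naive: update on "night", then update again on "high average speed"
# as if the two observations were independent.
prior = marginal(accident=True)
lr_time = ((marginal(time="night", accident=True) / prior)
           / (marginal(time="night", accident=False) / (1.0 - prior)))
lr_speed = ((marginal(speed="high", accident=True) / prior)
            / (marginal(speed="high", accident=False) / (1.0 - prior)))
p_naive = update(update(prior, lr_time), lr_speed)

print(f"P(accident | night, high speed), from the joint: {p_correct:.3f}")  # 0.050
print(f"same thing estimated by chained updates:         {p_naive:.3f}")    # 0.082
```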

On the other hand, if we used the time of day and the individual speed of the automobile to predict the probability of an accident, we would get much better results (better in the sense of being closer to the correct probability). It is still not totally correct, since an individual’s speed partially depends on the average speed of the other automobiles, which in turn is partly determined by the time of day. An even better second parameter would be the deviation of the individual’s speed from the average at that time. Even without going through the math, we can intuitively see that a person’s probability of having an accident while traveling at 80 mph is greater if the average speed at that time is 40 than if the average speed is also 80. In both cases the car is traveling too fast, but in the second case it is likely safer at that speed than it would be traveling at 40 while everyone else is going 80. (Doing 80 when the average is 120 would also be unwise, but that situation is unlikely.) The old adage “Speed Kills” is better rendered as “Speed Kills, but Differential Speed Kills Better.”
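Here is the same point as a toy formula. The coefficients below are invented for illustration, not fitted to any accident data; the only feature that matters is that the risk is driven mainly by the deviation from the prevailing average speed and only secondarily by the absolute speed.

```python
# A toy relative-risk model (coefficients invented for illustration):
# risk grows mainly with the deviation from the average speed, and only
# slightly with absolute speed.

def relative_risk(speed_mph, avg_speed_mph):
    differential = abs(speed_mph - avg_speed_mph)
    return 1.0 + 0.05 * differential + 0.01 * speed_mph

print(round(relative_risk(80, 40), 2))  # 3.8 -- doing 80 when traffic averages 40
print(round(relative_risk(80, 80), 2))  # 1.8 -- doing 80 when everyone else does 80
print(round(relative_risk(40, 80), 2))  # 3.4 -- doing 40 when everyone else does 80
```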

Going back to Unwin’s six parameters: (1) the recognition of goodness; (2) the existence of moral evil; (3) the existence of natural evil; (4) intra-natural miracles; (5) extra-natural miracles; and (6) religious experiences. Even without wading through his detailed attempts to define these terms carefully, we can see that they are not all mutually orthogonal. Without careful definition, the parameters are so muddled as to be useless for describing happenings in the physical world. For example, consider #2 and #3 in the presence of #4 and #5. If we assume the occurrence of miracles (carefully defined) has a purpose, then that purpose is reasonably linked to the existence of evil; that is, one suspects miracles would be performed in an attempt to decrease evil in the world. If the miracles are successful, then the parameters are not independent. Notice that I have not quibbled with his definitions or even challenged the existence of his parameters. I am only saying that they are not independent.

The only reason for dwelling on this real-world example is that exactly this problem recurs time and again when setting up formulations for estimating the probability of events from multiple parameter inputs. Another example that I have used in a previous column is separating red and white blood cells. You can use size as one parameter and color as another. Since red cells tend to be red and white cells tend to be white, color alone gives a pretty good separation, but the populations overlap. So you might want to use size also. White cells tend to be bigger, but again the populations overlap. One way to improve the results is to define a new parameter that is a combination of color and size. Then, using the measured value of this new parameter, we can more accurately place individual cells in the proper category. That is, one can often combine partially related parameters to make one or more artificial parameters that are truly orthogonal.
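As a concrete (if simplified) version of that idea, the sketch below generates made-up color and size measurements for the two cell populations and builds the combined parameter with Fisher’s linear discriminant, one standard way to do this; the column does not say which method the original work used.

```python
# A sketch of combining two overlapping, correlated measurements
# (color and size) into one new parameter using Fisher's linear
# discriminant. The measurements are synthetic, invented numbers.

import numpy as np

rng = np.random.default_rng(0)

# Made-up (redness, size) measurements: red cells redder and smaller,
# white cells less red and larger, with overlapping ranges.
red_cells = rng.normal(loc=[0.8, 7.0], scale=[0.15, 1.0], size=(500, 2))
white_cells = rng.normal(loc=[0.4, 11.0], scale=[0.15, 2.0], size=(500, 2))

# Fisher's direction: w is proportional to
# (pooled within-class covariance)^-1 (mean_white - mean_red).
mean_r, mean_w = red_cells.mean(axis=0), white_cells.mean(axis=0)
pooled_cov = 0.5 * (np.cov(red_cells.T) + np.cov(white_cells.T))
w = np.linalg.solve(pooled_cov, mean_w - mean_r)

# The new, single parameter is a weighted combination of color and size.
score_r = red_cells @ w
score_w = white_cells @ w
threshold = 0.5 * (score_r.mean() + score_w.mean())

accuracy = 0.5 * ((score_r < threshold).mean() + (score_w > threshold).mean())
print(f"fraction classified correctly on the combined parameter: {accuracy:.1%}")
```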

Again, to be fair to Unwin, he admits that what he did was not strictly correct, but that it was correct enough for the purpose of his book. I’m sure he doesn’t make that mistake when siting nuclear power plants. Hmm… What does “correct enough” mean?

In response to the interest my original tutorial generated, I have completely rewritten and expanded it. Check out its availability through Lockergnome. The new version is over 100 pages long, with chapters that alternate between discussions of the theory and puzzles included just for fun. Puzzle lovers will be glad to know that I included an answers section with discussions of why each answer is correct and how it was obtained. Most of the material has appeared in these columns, but some is new, and most of the discussions are expanded from their original column format.

[tags]The Probability of God, Stephen D. Unwin, statistics, decision theory[/tags]
