The August 2011 issue of *Scientific American* has an interesting article by Mario Livio called “Why Math Works.” The issue he addresses applies also to the underpinnings of logic and decision theory. The puzzle starts with the observation that in the real world of sensory experience, mathematics can be used with stunning accuracy to describe and predict events. It is not obvious why this should be so. Philosophers have worried about this for thousands of years.

We use mathematics to describe the universe so often that most of the time we do not give it a second thought. But philosophers since the beginning of counting have wondered why the world can be described with remarkable accuracy (as accurately as we can measure, sometimes to one part in a trillion) by equations based on mathematics. There are two venerable explanations: mathematics is a thing that exists by itself and we discover it in nature, or mathematics is a tool that humans invent to solve problems. In the second scenario, there is no mystery about why mathematics works, because only the useful inventions would have survived. Mathematics that gave results counter to measurement would have been abandoned and forgotten.

I tend to be in the second camp, simply because I see no empirical reason to believe that theorems, proofs, diagrams, etc. exist absent human minds to consider them. But just as many people in the world believe in things they cannot see or detect, I am enticed by the thought that mathematics has some kind of existence outside the human domain. (For further reading, see Plato.)

The same considerations arise with decision theory and with the study of statistics and probability in general. (Remember: statistics refers to past measurements, and because it is defined by measurement it is secure. Probability refers to future measurements and is therefore less secure. Probability can be derived from statistics.) To make good decisions, we need a mixture of logic, probability theory, and past measurements. And even when we make the best possible decision based on the best modeling and the best input, we can still be surprised by having selected the absolute worst choice in that instance.
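The distinction between secure statistics and less-secure probability can be made concrete with a small sketch. The data here are simulated stand-ins for real weather records (my own illustrative example, not anything from the article): the count of past rainy days is a statistic, fixed by measurement, while using its frequency as an estimate for a future day is the derived, less-secure probability.

```python
import random

# Hypothetical past measurements: 1 = rain, 0 = no rain, on 200 similar days.
# (Simulated here for illustration; in practice these would be real records.)
random.seed(1)
past_days = [1 if random.random() < 0.3 else 0 for _ in range(200)]

# Statistics: a summary of what actually happened -- secure, by measurement.
rainy_days = sum(past_days)

# Probability: reusing that statistic as an estimate for a future day -- less secure,
# since tomorrow is not obliged to resemble the record.
p_rain = rainy_days / len(past_days)
print(f"Observed frequency: {rainy_days}/{len(past_days)} = {p_rain:.2f}")
```

The past frequency is exact; only the step of projecting it forward involves an assumption.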

That is, decision theory really only helps us pick the best choice on average, assuming we could run the same event many times with exactly the same parameters. Most often, the decision we are trying to make is a singular event; the conditions will never be exactly the same again. In that case, what does it mean to say you made the best choice? In a multiverse with infinite variations on you and your decision-making anguish, one could in principle sum up the results over an infinite number of trials and confirm which decision was best on average. But we do not know that a multiverse exists in that sense, and even if it did, we could not make the required measurements. So what is the underlying reality, or meaning, of a predicted best choice based on decision theory?
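The "best on average, worst in this instance" point can be simulated. The payoffs below are invented for illustration: choice A has the higher expected value, yet in any single run it can come out behind the safe choice B. Only by rerunning the "same" decision many times, as the hypothetical multiverse would, does A's superiority show up.

```python
import random

random.seed(0)

# Two hypothetical choices (illustrative payoffs, not from the text):
# A: win 10 with probability 0.6, else 0  -> expected value 6.0
# B: win 5 with certainty                 -> expected value 5.0
def choice_a():
    return 10 if random.random() < 0.6 else 0

def choice_b():
    return 5

# A singular event: the best-on-average choice can still pay out nothing.
single_a, single_b = choice_a(), choice_b()

# Rerunning the decision many times with the same parameters,
# A's average pulls ahead of B's.
n = 100_000
avg_a = sum(choice_a() for _ in range(n)) / n
avg_b = sum(choice_b() for _ in range(n)) / n
print(f"one trial: A={single_a}, B={single_b}; averages: A={avg_a:.2f}, B={avg_b:.2f}")
```

In roughly 40% of single trials A pays nothing, so "you made the best choice" is a claim about the ensemble, not about the one outcome you actually get.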

A similar problem arises with weather prediction. What exactly does “10% chance of rain tomorrow” mean? Roughly, it means that when the usual measurements are taken with their usual accuracy, fed into a standard forecasting program, and the allowed variations are considered, one day in ten like tomorrow will see rain. Given the same input data and a better computer model, that number might change; given better input data, it might change again. Is there a reality to this prediction beyond this utilitarian view?
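The frequency reading of the forecast can be sketched directly (the 10% figure comes from the text; the population of "similar days" is my assumption): imagine many days that look identical to tomorrow as far as the model can tell, and count how often it rains.

```python
import random

random.seed(42)

# Frequency reading of "10% chance of rain tomorrow": among many days
# indistinguishable from tomorrow (to the model), about one in ten is rainy.
P_RAIN = 0.10
n_days = 10_000
rainy = sum(1 for _ in range(n_days) if random.random() < P_RAIN)
print(f"Rain on {rainy} of {n_days} similar days ({rainy / n_days:.1%})")
```

Tomorrow itself, of course, is only one member of that imagined population, which is exactly where the puzzle about the prediction's underlying reality comes from.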