As theorists we work with many definitions of rationality, all of them far from perfect. Here’s a contribution from Karl Popper, from The Logic of Scientific Discovery:


“…I equate the rational attitude and the critical attitude. The point is that, whenever we propose a solution to a problem, we ought to try as hard as we can to overthrow our solution, rather than defend it. Few of us, unfortunately, practice this precept…”


All of our axiomatic definitions of rationality have the flavor of presenting a decision-maker with certain possible avenues of self-criticism, and calling him rational if he is immune to them. This attitude is typified by the famous story of Savage’s reaction to the Allais paradox: he initially made the “normal” pair of choices, which together contradict the substitution axiom. When this was pointed out, he considered his decisions a mistake.
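For concreteness, the clash can be checked directly. The sketch below uses the standard textbook version of Allais’s lotteries (the specific payoffs and probabilities are the conventional ones, not taken from this text) and searches over candidate utility assignments to confirm that no expected-utility function can rationalize the common pattern of choices:

```python
from fractions import Fraction as F

# Standard Allais lotteries (textbook payoffs, in millions of dollars):
#   Pair 1:  A = $1 for sure           vs  B = 10% $5, 89% $1, 1% $0
#   Pair 2:  C = 11% $1, 89% $0        vs  D = 10% $5, 90% $0
# The "normal" pattern is to choose A over B, and D over C.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[x] for x, p in lottery.items())

A = {1: F(1)}
B = {5: F(10, 100), 1: F(89, 100), 0: F(1, 100)}
C = {1: F(11, 100), 0: F(89, 100)}
D = {5: F(10, 100), 0: F(90, 100)}

# Sweep many candidate utilities with u(0) < u(1) < u(5). None can make
# A preferred to B while D is preferred to C: A > B reduces to
# 0.11*u(1) > 0.10*u(5) + 0.01*u(0), and D > C to the exact reverse.
violations = 0
for u1 in range(1, 50):
    for u5 in range(u1 + 1, 51):
        u = {0: F(0), 1: F(u1), 5: F(u5)}
        prefers_A = expected_utility(A, u) > expected_utility(B, u)
        prefers_D = expected_utility(D, u) > expected_utility(C, u)
        if prefers_A and prefers_D:
            violations += 1

print(violations)  # 0: no utility assignment rationalizes both choices
```

Exact rational arithmetic (`Fraction`) is used so that the inequality checks are not confounded by floating-point rounding; the empty search result is exactly Savage’s predicament before he revised his choices.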

Of course, any set of formal rules will leave out some possible criticisms, i.e., be insufficient for rationality; and conversely, there is almost always an argument that a given principle, however compelling, is not necessary for rationality. Furthermore, each principle takes a certain amount of mental energy to check, and the art of good decision-making must involve making wise decisions about which principles to prioritize. Decision theorists can, in principle, aid this process by proving equivalence results: “If you want to follow Savage’s axioms, use a subjective probability distribution.” Unfortunately, it is difficult to imagine that there will ever be a Grand Unified decision theory which helps the decision-maker avoid every possible criticism. Indeed, such a theory would be tantamount to “strong AI,” the problem of building a machine which mimics or exceeds human capacities, which is considered at least decades away. Decision theory is not so ambitious, but merely tries to help people avoid selected mistakes in well-defined areas.