The following summary was written by Frank Voisin, who regularly writes for Frankly Speaking. Recently, Frank sold four restaurants and returned to school to complete a combined LLB/MBA.

It seems very obvious at this point in the book, but Ayres uses all of Chapter 5 to present dozens of examples showing that traditional experts are not up to par with super crunching. One problem is that most experts are overconfident, dismissing their potential errors rather than investigating them. Another is that many problems are simply too complex for the human brain to crunch accurately – we just aren't good at permutations, combinations, and statistical modeling! We are "damnably overconfident about our predictions and slow to change them in the face of new evidence."

Evidence shows that human judges are "not merely worse than optimal regression equations; they are worse than almost any regression equation"! Regression is not only more accurate; it is also upfront about how often its predictions will be correct.
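That "upfront" quality can be made concrete with a minimal sketch (hypothetical data, pure-Python least squares – an illustration of the idea, not anything from the book): the fitted model comes packaged with a number, R², that states how much of the variation it actually explains.

```python
# Hypothetical paired observations (not from the book).
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x

# R^2: the fraction of variance the model accounts for --
# the model's built-in statement of how good its predictions are.
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"prediction at x=7: {a + b * 7:.2f} (R^2 = {r_squared:.3f})")
```

A human expert gives you a verdict; the regression gives you a verdict plus an honest accounting of its own track record on the data it was fit to.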

However, regressions only work in the aggregate, and individual cases can be outliers: their unusual features are too rare to move the regression output, yet important enough in certain situations to warrant human oversight. That benefit, though, must be weighed against our bias toward overconfidence in our ability to outperform the system.
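One way to reconcile the two, sketched here with hypothetical numbers (my illustration, not a method from the book), is to let the model handle the aggregate and flag only the cases that deviate sharply from its predictions for human review:

```python
import statistics

# Hypothetical model predictions vs. actual outcomes (not from the book).
predicted = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
actual = [3.1, 3.4, 4.2, 4.4, 5.1, 8.9, 6.1]  # one aberrant case

residuals = [a - p for a, p in zip(actual, predicted)]
cutoff = 2 * statistics.stdev(residuals)

# Flag cases far from the model's prediction: too rare to move the
# regression, but exactly where expert discretion earns its keep.
flagged = [i for i, r in enumerate(residuals) if abs(r) > cutoff]
print(f"cases flagged for human review: {flagged}")
```

The threshold (two standard deviations here) is an arbitrary choice for the sketch; the point is that the statistical tool itself can tell the expert where to look.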

Conclusion: Statistical tools are extremely useful in guiding the decisions of experts, but should not obliterate expert discretion. If we surrender discretion to the machines entirely, we lose the ability to take into account factors which, while not statistically significant, DO play a role in individual cases. The greatest potential lies in situations where machines guide the discretion of experts, using statistical probabilities to steer them toward the best outcome (this is where my previous post, Occam's Razor, comes into play). We have some use after all!