Confirmation bias and social reasoning
There is an apparent tension between Bayesian approaches to understanding brain function on the one hand, and cognitive biases such as confirmation bias on the other. If the brain approximates ideal Bayesian inference, a systematic tendency to discount disconfirming evidence is puzzling.
Sure, you would expect humans to have some inaccuracies in their mental model-building, simply as a result of being imperfectly optimised products of evolution. And you would expect some biases in favour of safety, on the grounds that false negatives are more costly than false positives: it’s better to be startled by something that turns out not to be a predator than to fail to be startled by an actual predator.
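To make the asymmetry concrete, here is a rough sketch with made-up numbers: suppose a needless flight costs 1 unit of fitness in wasted energy, while ignoring a real predator costs 100 in expectation. Then fleeing pays whenever p × 100 > 1, where p is the probability that the rustle really is a predator, so the optimal policy is to startle at anything with more than a 1% chance of being dangerous. A well-calibrated animal should be jumpy.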
But confirmation bias seems hard to explain that way. It looks like a specific and purposeful deviation from the ideal, not merely a failure of evolution to produce a perfectly optimised design.
One of the things Haidt points out in The Righteous Mind is that people are social reasoners: they rarely change their minds on their own. They might change their minds on hearing a convincing argument from someone else, but people rarely just notice, unprompted, that they’re mistaken.
Perhaps we seek to learn what our culture believes. Being able to correctly predict the beliefs and opinions of the in-group might matter more than accuracy in many domains, particularly where the subject is politically contested. If that were so, then accuracy, and noticing your own mistakes, would be actively disadvantageous: they would knock you out of alignment with the group. Indeed, you might even mark yourself as a member of the out-group, which could be disastrous.