If you accept a clear answer to a complex question of fact, any evidence for an answer other than that one will be experienced as prediction error. If you decline to pick an answer, any evidence on the matter will show up as a (perhaps lesser) prediction error, for you failed to anticipate even a general direction of the evidence.
Thus I expect the predictive-processing human brain to prefer sticking to a clear answer: it trades the constant error stream of the second case for the intermittent errors of the first.
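The trade-off above can be put in toy numbers. The following sketch is my own illustration, not anything from the predictive-processing literature: the committed brain predicts one direction of evidence and errs only on contrary items, while the agnostic brain predicts no direction and registers a smaller "perhaps lesser" error on every item. All the constants (the 70% support rate, the 0.5 agnostic penalty) are assumptions chosen for the example.

```python
import random

random.seed(42)

# Assumed parameters, mine alone:
P_SUPPORTING = 0.7    # fraction of evidence favouring the picked answer
AGNOSTIC_ERROR = 0.5  # the "perhaps lesser" per-item error when no
                      # direction was predicted at all
N_ITEMS = 1000

# Evidence stream: +1 supports the favoured answer, -1 contradicts it.
evidence = [1 if random.random() < P_SUPPORTING else -1 for _ in range(N_ITEMS)]

# Committed brain: predicts +1 every time; full error only on contrary items.
committed_error = sum(1.0 for e in evidence if e != 1)

# Agnostic brain: predicts no direction; every item is a small surprise.
agnostic_error = AGNOSTIC_ERROR * len(evidence)

print(committed_error, agnostic_error)
# Committed error should land near (1 - 0.7) * 1000 = 300, against the
# agnostic's 500: committing wins whenever the per-item agnostic error
# exceeds the rate of contrary evidence.
```

With these numbers, committing pays off; if the question were closer to 50/50, or the agnostic penalty much smaller, the comparison would flip.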
Once a brain has picked an answer, its drive to minimise prediction error will push it to reduce its exposure to contradictory evidence. When the answer to a question is non-obvious and debated — less "what colour is Earth's daytime sky under normal conditions?" or "how many legs do I have?" and more "what causes contemporary common obesity?" or "will AI kill us all?" — you will encounter significant amounts of evidence in multiple directions. Thus one avoids more contradictory evidence on less obvious questions, for there is more error-inducing evidence to avoid.
A brain which has already accepted a reasonably well-supported answer would not switch even in the face of a large amount of contrary evidence. Tho changing to the correct, better-supported answer would reduce prediction error, on ambiguous questions (as characterised above) the reduction would be small unless the new hypothesis fits far more of the evidence than the old one. That small reduction is easily outweighed by the distortion induced by evidence avoidance.
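The interaction between a small error reduction and evidence avoidance can also be put in toy numbers. Again, every figure here is an assumption of mine for illustration: the new hypothesis genuinely fits more of the full evidence (60% vs 40%), but the brain screens out most items that contradict its current answer before they register.

```python
import random

random.seed(7)

# Assumed parameters, mine alone:
P_OLD = 0.4       # fraction of ALL evidence supporting the old answer
AVOIDANCE = 0.7   # fraction of contrary evidence the brain never sees
N = 10_000

# Full evidence stream: +1 supports the old answer, -1 the new one.
full_stream = [1 if random.random() < P_OLD else -1 for _ in range(N)]

# The brain keeps everything that agrees with it, but only a sliver of
# the contrary items survive its avoidance.
seen = [e for e in full_stream if e == 1 or random.random() > AVOIDANCE]

# Per-item error each hypothesis registers on the evidence actually seen:
old_error = sum(e != 1 for e in seen) / len(seen)
new_error = sum(e != -1 for e in seen) / len(seen)

print(f"old answer: {old_error:.2f}  new answer: {new_error:.2f}")
# On the unfiltered stream the old answer errs 60% of the time and the
# new one 40%; on the filtered stream the old answer registers LESS
# error, so switching no longer looks like an improvement at all.
```

The point of the sketch is only that moderate avoidance suffices to erase a moderate evidential advantage; a new hypothesis that fit, say, 90% of the full evidence would still win through this filter.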
All this is to say that the behaviours of confirmation bias come as a natural result of the principles of predictive processing.
But maybe I misunderstand predictive processing, or otherwise recklessly erred, and thereby confirmed my existing belief that predictive processing explains everything about the human mind.