Explaining guides learners towards perfect patterns, not perfect prediction
When learners explain to themselves as they encounter new information, they recruit a suite of processes that influence subsequent learning. One consequence is that learners are more likely to discover exceptionless rules that underlie what they are trying to explain. Here we investigate what it is about exceptionless rules that satisfies the demands of explanation. Are exceptions unwelcome because they lower predictive accuracy, or because they challenge some other explanatory ideal, such as simplicity or breadth? To compare these alternatives, we introduce a causally rich property-explanation task in which exceptions to a general rule are either arbitrary or predictable. If predictive accuracy is sufficient to satisfy the demands of explanation, then introducing a rule plus an exception that together support perfect prediction should block the discovery of a more subtle but exceptionless rule. Across two experiments, we find that the effects of explanation go beyond attaining perfect prediction.