Research in education and cognitive development suggests that explaining plays a key role in learning and generalization: when learners provide explanations – even to themselves – they learn more effectively and generalize more readily to novel situations. This paper explores a potential mechanism underlying this effect, motivated by philosophical accounts of the structure of explanations: that explaining guides learners to interpret observations in terms of unifying patterns or regularities, which in turn promotes the discovery of broad generalizations. Experiment 1 finds that prompting participants to explain while learning artificial categories promotes the induction of a broad generalization underlying category membership. Experiment 2 suggests that explanation most readily prompts discovery in the presence of anomalies: observations inconsistent with current beliefs. Experiment 1 additionally suggests that explaining might result in reduced memory for details. These findings provide evidence for the proposed mechanism and insight into the potential role of explanation in discovery and generalization.
Many students reject evolutionary theory, whether or not they adequately understand basic evolutionary concepts. We explore the hypothesis that accepting evolution is related to understanding the nature of science. In particular, students may be more likely to accept evolution if they understand that a scientific theory is provisional but reliable, that scientists employ diverse methods for testing scientific claims, and that relating data to theory can require inference and interpretation. In a study with university undergraduates, we find that accepting evolution is significantly correlated with understanding the nature of science, even when controlling for the effects of general interest in science and past science education. These results highlight the importance of understanding the nature of science for accepting evolution. We conclude with a discussion of key characteristics of science that challenge a simple portrayal of the scientific method and that we believe should be emphasized in classrooms.
What makes some explanations better than others? This paper explores the roles of simplicity and probability in evaluating competing causal explanations. Four experiments investigate the hypothesis that simpler explanations are judged both better and more likely to be true. In all experiments, simplicity is quantified as the number of causes invoked in an explanation, with fewer causes corresponding to a simpler explanation. Experiment 1 confirms that all else being equal, both simpler and more probable explanations are preferred. Experiments 2 and 3 examine how explanations are evaluated when simplicity and probability compete. The data suggest that simpler explanations are assigned a higher prior probability, with the consequence that disproportionate probabilistic evidence is required before a complex explanation will be favored over a simpler alternative. Moreover, committing to a simple but unlikely explanation can lead to systematic overestimation of the prevalence of the cause invoked in the simple explanation. Finally, Experiment 4 finds that the preference for simpler explanations can be overcome when probability information unambiguously supports a complex explanation over a simpler alternative. Collectively, these findings suggest that simplicity is used as a basis for evaluating explanations and for assigning prior probabilities when unambiguous probability information is absent. More broadly, evaluating explanations may operate as a mechanism for generating estimates of subjective probability.
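The prior-probability claim above can be illustrated with a toy Bayesian calculation. This is a minimal sketch; the function name and all numbers below are illustrative assumptions, not data or a model from the experiments:

```python
# A minimal Bayesian sketch (hypothetical numbers) of why a higher prior
# on the simpler explanation demands disproportionate likelihood evidence
# before the complex explanation is favored.

def posterior_odds(prior_simple, prior_complex, lik_simple, lik_complex):
    """Posterior odds of the simple vs. the complex explanation (Bayes' rule)."""
    return (prior_simple * lik_simple) / (prior_complex * lik_complex)

# Suppose simplicity earns the one-cause explanation a 3:1 prior advantage.
P_SIMPLE, P_COMPLEX = 0.75, 0.25

# Evidence favoring the complex explanation 2:1 does not overturn the prior:
print(posterior_odds(P_SIMPLE, P_COMPLEX, 0.2, 0.4) > 1)  # simple still favored

# Only disproportionate evidence (here 4:1) tips the balance:
print(posterior_odds(P_SIMPLE, P_COMPLEX, 0.1, 0.4) < 1)  # complex now favored
```

On this reading, the experiments' finding that "disproportionate probabilistic evidence is required" corresponds to the likelihood ratio having to outweigh the simplicity-based prior odds.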
Unlike educated adults, young children demonstrate a "promiscuous" tendency to explain objects and phenomena by reference to functions, endorsing what are called teleological explanations. This tendency becomes more selective as children acquire increasingly coherent beliefs about causal mechanisms, but it is unknown whether a widespread preference for teleology is ever truly outgrown. The study reported here investigated this question by examining explanatory judgments in patients with Alzheimer's disease (AD), whose dementia affects the rich causal beliefs adults typically consult in evaluating explanations. The results indicate that unlike healthy adults, AD patients systematically and promiscuously prefer teleological explanations, suggesting that an underlying tendency to construe the world in terms of functions persists throughout life. This finding has broad relevance not only to understanding conceptual impairments in AD, but also to theories of development, learning, and conceptual change. Moreover, this finding sheds light on the intuitive appeal of creationism.
Teleological explanations (TEs) account for the existence or properties of an entity in terms of a function: we have hearts because they pump blood, and telephones for communication. While many teleological explanations seem appropriate, others are clearly not warranted: for example, that rain exists so that plants can grow. Five experiments explore the theoretical commitments that underlie teleological explanations. With the analysis of [Wright, L. (1976). Teleological Explanations. Berkeley, CA: University of California Press] from philosophy as a point of departure, we examine in Experiment 1 whether teleological explanations are interpreted causally, and confirm that TEs are only accepted when the function invoked in the explanation played a causal role in bringing about what is being explained. However, we also find that playing a causal role is not sufficient for all participants to accept TEs. Experiment 2 shows that this is not because participants fail to appreciate the causal structure of the scenarios used as stimuli. In Experiments 3-5 we show that the additional requirement for TE acceptance is that the process by which the function played a causal role must be general in the sense of conforming to a predictable pattern. These findings motivate a proposal, Explanation for Export, which suggests that a psychological function of explanation is to highlight information likely to subserve future prediction and intervention. We relate our proposal to normative accounts of explanation from philosophy of science, as well as to claims from psychology and artificial intelligence.
Generating and evaluating explanations is spontaneous, ubiquitous and fundamental to our sense of understanding. Recent evidence suggests that in the course of an individual's reasoning, engaging in explanation can have profound effects on the probability assigned to causal claims, on how properties are generalized and on learning. These effects follow from two properties of the structure of explanations: explanations accommodate novel information in the context of prior beliefs, and do so in a way that fosters generalization. The study of explanation thus promises to shed light on core cognitive issues, such as learning, induction and conceptual representation. Moreover, the influence of explanation on learning and inference presents a challenge to theories that neglect the roles of prior knowledge and explanation-based reasoning.
The current debate over whether to teach Intelligent Design creationism in American public schools provides the rare opportunity to watch the interaction between scientific knowledge and intuitive beliefs play out in courts rather than cortex. Although it is tempting to think the controversy stems only from ignorance about evolution, a closer look reinforces what decades of research in cognitive and social psychology have already taught us: that the relationship between understanding a claim and believing a claim is far from simple. Research in education and psychology confirms that a majority of college students fail to understand evolutionary theory, but also finds no support for a relationship between understanding evolutionary theory and accepting it as true. We believe the intuitive appeal of Intelligent Design owes as much to misconceptions about science and morality as it does to misconceptions about evolution. To support this position we present a brief tour of misconceptions: evolutionary, scientific and moral.
The classical receptive field (RF) concept, the idea that a visual neuron responds to fixed parts and properties of a stimulus, has been challenged by a series of recent physiological results. Here, we extend these findings to human vision, demonstrating that the extent of spatial averaging in contrast perception is also flexible, depending strongly on stimulus contrast and uniformity. At low contrast, spatial averaging is greatest (about 11 min of arc) within uniform regions such as edges, as expected if the relevant neurons have orientation-selective RFs. At high contrast, spatial averaging is minimal. These results can be understood if the visual system is balancing a trade-off between noise reduction, which favours large areas of averaging, and detail preservation, which favours minimal averaging. Two distinct populations of neurons with hard-wired RFs could account for our results, as could the more intriguing possibility of dynamic, contrast-dependent RFs.
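The proposed trade-off between noise reduction and detail preservation can be sketched as a toy adaptive filter. This is a hypothetical illustration, not the study's model: the function name, window sizes, contrast measure, and threshold are all assumptions made for the sake of the example:

```python
# Toy sketch of contrast-dependent spatial averaging: wide averaging at
# low contrast (noise reduction), narrow averaging at high contrast
# (detail preservation). All parameters are illustrative assumptions.

def adaptive_average(signal, wide=5, narrow=1, contrast_threshold=0.5):
    """Average each sample over a window whose size depends on local contrast."""
    out = []
    n = len(signal)
    for i in range(n):
        # Local contrast: absolute difference across immediate neighbours.
        local_contrast = abs(signal[min(n - 1, i + 1)] - signal[max(0, i - 1)])
        w = narrow if local_contrast > contrast_threshold else wide
        half = w // 2
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# A noisy step: the flat (low-contrast) regions are smoothed over a wide
# window, while samples at the high-contrast step use a window of one
# sample and so pass through unchanged.
noisy_step = [0.0, 0.1, 0.0, 0.1, 1.0, 1.1, 1.0, 1.1]
print(adaptive_average(noisy_step))
```

The same qualitative behaviour could come either from selecting between two fixed filter populations or from a single filter whose extent varies with contrast, mirroring the two interpretations the abstract leaves open.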
Recent neuropsychological and imaging data have implicated different brain networks in the processing of different word classes, nouns being linked primarily to posterior, visual object-processing regions and verbs to frontal, motor-processing areas. However, as most of these studies have examined words in isolation, the consequences of such anatomically based representational differences, if any, for the processing of these items in sentences remain unclear. Additionally, in some languages many words (e.g. 'drink') are class-ambiguous, i.e. they can play either role depending on context, and it is not yet known how the brain stores and uses information associated with such lexical items in context. We examined these issues by recording event-related potentials (ERPs) in response to unambiguous nouns (e.g. 'beer'), unambiguous verbs (e.g. 'eat'), class-ambiguous words and pseudowords used as nouns or verbs within two types of minimally contrastive sentence contexts: noun-predicting (e.g. 'John wanted THE [target] but …') and verb-predicting ('John wanted TO [target] but …'). Our results indicate that the nature of neural processing for nouns and verbs is a function of both the type of stimulus and the role it is playing. Even when the context completely specifies their role, word class-ambiguous items differ from unambiguous ones over frontal regions by ~150 ms. Moreover, whereas pseudowords elicit larger N400s when used as verbs than when used as nouns, unambiguous nouns and ambiguous words used as nouns elicit more frontocentral negativity than unambiguous verbs and ambiguous words used as verbs, respectively. Additionally, unambiguous verbs elicit a left-lateralized, anterior positivity (~200 ms) not observed for any other stimulus type, though only when these items are used appropriately as verbs (i.e. in verb-predicting contexts).
In summary, the pattern of neural activity observed in response to lexical items depends on their general probability of being a verb or a noun and on the particular role they are playing in any given sentence. This implicates more than a simple two-way distinction of the brain networks involved in their storage and processing. Experience, as well as context during on-line language processing, clearly shapes the neural representations of nouns and verbs, such that there is no single neural marker of word class. Our results further suggest that the presence and nature of the word class-based dissociations observed after brain damage are similarly likely to be a function of both the type of stimulus and the context in which it occurs, and thus must be assessed accordingly.