Awe has traditionally been considered a religious or spiritual emotion, yet scientists often report that awe motivates them to answer questions about the natural world, and to do so in naturalistic terms. Indeed, awe may be closely related to scientific discovery and theoretical advance. Awe is typically triggered by something vast (either literally or metaphorically) and initiates processes of accommodation, in which existing mental schemas are revised to make sense of the awe‐inspiring stimuli. This process of accommodation is essential for the kind of belief revision that characterizes scientific reasoning and theory change. Across six studies, we find that the tendency to experience awe is positively associated with scientific thinking, and that this association is not shared by other positive emotions. Specifically, we show that the disposition to experience awe predicts a more accurate understanding of how science works, rejection of creationism, and rejection of unwarranted teleological explanations more broadly.
Can science explain romantic love, morality, and religious belief? We documented intuitive beliefs about the limits of science in explaining the human mind. We considered both epistemic evaluations (concerning whether science could possibly fully explain a given psychological phenomenon) and nonepistemic judgments (concerning whether scientific explanations for a given phenomenon would generate discomfort), and we identified factors that characterize phenomena judged to fall beyond the scope of science. Across six studies, we found that participants were more likely to judge scientific explanations for psychological phenomena to be impossible and uncomfortable when, among other factors, the phenomena supported first-person, introspective access (e.g., feeling empathetic as opposed to reaching for objects), contributed to making humans exceptional (e.g., appreciating music as opposed to forgetfulness), and involved conscious will (e.g., acting immorally as opposed to having headaches). These judgments about the scope of science have implications for science education, policy, and the public reception of psychological science.
Teleological explanations, which appeal to a function or purpose (e.g., “kangaroos have long tails for balance”), seem to play a special role within the biological domain. We propose that such explanations are compelling because they are evaluated on the basis of a salient cue: structure-function fit, or the correspondence between a biological feature’s form (e.g., tail length) and its function (e.g., balance). Across five studies with 843 participants in total, we find support for three predictions that follow from this proposal. First, we find that function information decreases reliance on mechanistic considerations when evaluating explanations (Experiments 1–3), indicating the presence of a salient, function-based cue. Second, we demonstrate that structure-function fit is the best candidate for this cue (Experiments 3–4). Third, we show that scientifically unwarranted teleological explanations are more likely to be accepted under speeded and unspeeded conditions when they are high in structure-function fit (Experiment 5). Experiment 5 also finds that structure-function fit extends beyond biology to teleological explanations in other domains. Jointly, these studies provide a new account of how teleological explanations are evaluated and why they are often (but not universally) compelling.
Young children often endorse explanations of the natural world that appeal to functions or purpose—for example, that rocks are pointy so animals can scratch on them. By contrast, most Western-educated adults reject such explanations. What accounts for this change? We investigated 4- to 5-year-old children’s ability to generalize the form of an explanation from examples by presenting them with novel teleological explanations, novel mechanistic explanations, or no explanations for 5 nonliving natural objects. We then asked children to explain novel instances of the same objects and novel kinds of objects. We found that children were able to learn and generalize explanations of both types, suggesting an ability to draw generalizations over the form of an explanation. We also found that teleological and mechanistic explanations were learned and generalized equally well, suggesting that if a domain-general teleological bias exists, it does not manifest as a bias in learning or generalization.
We report three experiments investigating whether people's judgments about causal relationships are sensitive to the robustness or stability of such relationships across a range of background circumstances. In Experiment 1, we demonstrate that people are more willing to endorse causal and explanatory claims based on stable (as opposed to unstable) relationships, even when the overall causal strength of the relationship is held constant. In Experiment 2, we show that this effect is not driven by a causal generalization's actual scope of application. In Experiment 3, we offer evidence that stable causal relationships may be seen as better guides to action. Collectively, these experiments document a previously underappreciated factor that shapes people's causal reasoning: the stability of the causal relationship.
Our goal in this paper is to experimentally investigate whether folk conceptions of explanation are psychologistic. In particular, are people more likely to classify speech acts as explanations when they cause understanding in their recipient? The empirical evidence that we present suggests this is so. Using the side-effect effect as a marker of mental state ascriptions, we argue that lay judgments of explanatory status are mediated by judgments of a speaker’s and/or audience’s mental states. First, we show that attributions of both understanding and explanation exhibit a side-effect effect. Next, we show that when the speaker’s and audience’s level of understanding is stipulated, the explanation side-effect effect goes away entirely. These results not only extend the side-effect effect to attributions of understanding, they also suggest that attributions of explanation exhibit a side-effect effect because they depend upon attributions of understanding, supporting the idea that folk conceptions of explanation are psychologistic.
As a strategy for exploring the relationship between understanding and knowledge, we consider whether epistemic luck – which is typically thought to undermine knowledge – undermines understanding. Questions about the etiology of understanding have also been at the heart of recent theoretical debates within epistemology. Kvanvig (2003) argued that there could be lucky understanding, offering an example he deemed persuasive. Grimm (2006) responded with a case that, he argued, demonstrated that there could not be lucky understanding. In this paper, we empirically examine how participants' patterns of understanding attributions line up with the predictions of Kvanvig and Grimm. We argue that the data challenge Kvanvig's position. People do not differentiate between knowing-why and understanding-why on the basis of proper etiology: attributions of knowledge and understanding involve comparable (and minimal) roles for epistemic luck. We thus posit that folk knowledge and understanding are etiologically symmetrical.
Is morality intuitive or deliberative? This distinction can obscure the role of folk moral theories in moral judgment; judgments may arise “intuitively” yet result from abstract theoretical and philosophical commitments that participate in “deliberative” reasoning.
Research has found that when children or adults attempt to explain novel observations in the course of learning, they are more likely to discover patterns that support ideal explanations: explanations that are maximally simple and broad. However, not all learning contexts support such explanations. Can explaining facilitate discovery nonetheless? We present a study in which participants were tasked with discovering a rule governing the classification of items, where the items were consistent with two non-ideal rules: one correctly classified 66% of cases, the other 83%. We find that when there is no ideal rule to be discovered (i.e., no 100% rule), participants prompted to explain are better than control participants at discovering the best available rule (i.e., the 83% rule). This supports the idea that seeking ideal explanations can be beneficial in a non-ideal world because the pursuit of an ideal explanation can facilitate the discovery of imperfect patterns along the way.
Both science and religion offer explanations for everyday events, but they differ with respect to their tolerance for mysteries. In the present research, we investigate laypeople's perceptions about the extent to which religious and scientific questions demand an explanation and the extent to which an appeal to mystery can satisfy that demand. In Study 1, we document a large domain difference between science and religion: scientific questions are judged to be more in need of explanation and less appropriately answered by appeal to mystery than religious questions. In Study 2, we demonstrate that these differences are not driven by differing levels of belief in the content of these domains. While the source of these domain differences remains unclear, we propose several hypotheses in the General Discussion.
Much of human learning throughout the lifespan is achieved through seeking and generating explanations. However, very little is known about what triggers a learner to seek an explanation. In two studies, we investigate what makes a given event or phenomenon stand in need of explanation. In Study 1, we show that a learner's judgment of "need for explanation" for a given question predicts that learner's likelihood of seeking an answer to this question. In Study 2, we explore several potential predictors of need for explanation. We find that the need for explanation is greater for questions expected to have useful answers that require expert understanding, and that "need for explanation" can be differentiated from general curiosity.
Our actions and decisions are regularly influenced by the social environment around us. Can social environment be leveraged to induce curiosity and facilitate subsequent learning? Across two experiments, we show that curiosity is contagious: social environment can influence people's curiosity about the answers to scientific questions. Our findings show that people are more likely to become curious about the answers to more popular questions, which in turn influences the information they choose to reveal. Given that curiosity has been linked to better learning, these findings have important implications for education.
Explanations often highlight inductively rich relationships that support further generalizations: learning that the knife is sharp because it is for cutting, we correspondingly infer that other things for cutting might also be sharp. When do children appreciate that explanations are good guides to generalization? We report a study in which 108 4- to 7-year-old children evaluated mechanistic, functional, and categorical explanations for the properties of objects, and subsequently generalized those properties to novel objects on the basis of shared mechanisms, functions, or category membership. Older children, but not younger children, were significantly more likely to generalize when the explanation they had received matched the subsequent basis for generalization (e.g., generalizing on the basis of a shared mechanism after hearing a mechanistic explanation). These findings shed light on how explanation and generalization become coordinated in development, as well as the role of explanations in young children’s learning.
Can opium's tendency to induce sleep be explained by appeal to a "dormitive virtue"? If the label merely references the tendency being explained, the explanation seems vacuous. Yet the presence of a label could signal genuinely explanatory content concerning the (causal) basis for the property being explained. In Experiments 1 and 2, we find that explanations for a person's behavior that appeal to a named tendency or condition are indeed judged to be more satisfying than equivalent explanations that differ only in omitting the name. In Experiment 3, we find support for one proposal concerning what it is about a name that drives a boost in explanatory satisfaction: named categories lead people to draw an inference to the existence of a cause underlying the category, a cause that is responsible for the behavior being explained. Our findings have implications for theories of explanation and point to the central role of causation in explaining behavior.
If someone brings about an outcome without intending to, is she causally and morally responsible for it? What if she acts intentionally, but as the result of manipulation by another agent? Previous research has shown that an agent's mental states can affect attributions of causal and moral responsibility to that agent, but little is known about what effect one agent's mental states can have on attributions to another agent. In Experiment 1, we replicate findings that manipulation lowers attributions of responsibility to manipulated agents. Experiments 2-7 isolate which features of manipulation drive this effect, a crucial issue for both philosophical debates about free will and attributions of responsibility in situations involving social influence more generally. Our results suggest that "bypassing" a manipulated agent's mental states generates the greatest reduction in responsibility, and we explain our results in terms of the effects that one agent's mental states can have on the counterfactual relations between another agent and an outcome.
When evaluating causal explanations, simpler explanations are widely regarded as better explanations. However, little is known about how people assess simplicity in causal explanations or what the consequences of such a preference are. We contrast 2 candidate metrics for simplicity in causal explanations: node simplicity (the number of causes invoked in an explanation) and root simplicity (the number of unexplained causes invoked in an explanation). Across 4 experiments, we find that explanatory preferences track root simplicity, not node simplicity; that a preference for root simplicity is tempered (but not eliminated) by probabilistic evidence favoring a more complex explanation; that committing to a less likely but simpler explanation distorts memory for past observations; and that a preference for root simplicity is greater when the root cause is strongly linked to its effects. We suggest that a preference for root-simpler explanations follows from the role of explanations in highlighting and efficiently representing and communicating information that supports future predictions and interventions.
Are explanations of different kinds (formal, mechanistic, teleological) judged differently depending on their contextual utility, defined as the extent to which they support the kinds of inferences required for a given task? We report three studies demonstrating that the perceived "goodness" of an explanation depends on the evaluator's current task: Explanations receive a relative boost when they support task-relevant inferences, even when all three explanation types are warranted. For example, mechanistic explanations receive higher ratings when participants anticipate making further inferences on the basis of proximate causes than when they anticipate making further inferences on the basis of category membership or functions. These findings shed light on the functions of explanation and support pragmatic and pluralist approaches to explanation.
Research suggests that the process of explaining influences causal reasoning by prompting learners to favor hypotheses that offer "good" explanations. One feature of a good explanation is its simplicity. Here, we investigate whether prompting children to generate explanations for observed effects increases the extent to which they favor causal hypotheses that offer simpler explanations, and whether this changes over the course of development. Children aged 4, 5, and 6 years observed several outcomes that could be explained by appeal to a common cause (the simple hypothesis) or two independent causes (the complex hypothesis). We varied whether children were prompted to explain each observation or, in a control condition, to report it. Children were then asked to make additional inferences for which the competing hypotheses generated different predictions. The results revealed developmental differences in the extent to which children favored simpler hypotheses as a basis for further inference in this task: 4-year-olds did not favor the simpler hypothesis in either condition; 5-year-olds favored the simpler hypothesis only when prompted to explain; and 6-year-olds favored the simpler hypothesis whether or not they explained.
Although storybooks are often used as pedagogical tools for conveying moral lessons to children, the ability to spontaneously extract "the moral" of a story develops relatively late. Instead, children tend to represent stories at a concrete level, one that highlights surface features and understates more abstract themes. Here we examine the role of explanation in 5- and 6-year-old children's developing ability to learn the moral of a story. Two experiments demonstrate that, relative to a control condition, prompts to explain aspects of a story facilitate children's ability to override salient surface features, abstract the underlying moral, and generalize that moral to novel contexts. In some cases, generating an explanation is more effective than being explicitly told the moral of the story, as in a more traditional pedagogical exchange. These findings have implications for moral comprehension, the role of explanation in learning, and the development of abstract reasoning in early childhood.