Mental simulation – such as imagining tilting a glass to figure out the angle at which water would spill – can be a way of coming to know the answer to an internally or externally posed query. Is this form of learning a species of inference or a form of observation? We argue that it is neither: learning through simulation is a genuinely distinct form of learning. On our account, simulation can support learning the answer to a query even when the basis for that answer is opaque to the learner. Moreover, through repeated simulation, the learner can reduce this opacity, supporting self-training and the acquisition of more accurate models of the world. Simulation is thus an essential part of the story of how creatures like us become effective learners and knowers.
People often answer why-questions with what we call experiential explanations: narratives or stories with temporal structure and concrete details. In contrast, on most theories of the epistemic function of explanation, explanations should be abstractive: structured by general relationships and lacking extraneous details. We suggest that abstractive and experiential explanations differ not only in level of abstraction, but also in structure, and that each form of explanation contributes to the epistemic goals of individual learners and of science. In particular, experiential explanations support mental simulation and survive transitions across background theories; as a result, they support learning and help us translate between competing frameworks. Experiential explanations play an irreducible role in human cognition – and perhaps in science.
The capacity to search for information effectively by asking informative questions is crucial for self-directed learning and develops throughout the preschool years and beyond. We tested the hypothesis that explaining observations in a given domain prepares children to ask more informative questions in that domain, and that it does so by promoting the identification of features that apply to multiple objects, thus supporting more effective questions. Across two experiments, 4- to 7-year-old children (N = 168) were prompted to explain observed evidence or to complete a control task prior to a 20-questions game. We found that prior prompts to explain led to a decrease in the number of questions needed to complete the game, but only for older children (ages 6-7). Moreover, we found that effects of explanation manifested as a shift away from questions that targeted single objects. These findings shed light on the development of question-asking in childhood and on the role of explanation in learning.
How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain-based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: People are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes towards scientific explanations.
This chapter introduces “learning by thinking” (LbT) as a form of learning distinct from familiar forms of learning through observation. When learning by thinking, the learner gains genuinely new insight in the absence of novel observations “outside the head.” Scientific thought experiments are canonical examples, but the phenomenon is much more widespread, and includes learning by explaining to oneself, through analogical reasoning, or through mental simulation. The chapter argues that episodes of LbT can be re-expressed as explicit arguments or inferences but are neither psychologically nor epistemically reducible to explicit arguments or inferences, and that this partially explains the novelty of the conclusions reached through LbT. It also introduces a new perspective on the epistemic value of LbT processes as practices with potentially beneficial epistemic consequences, even when the commitments they invoke and the conclusions they immediately deliver are not themselves true.
Lombrozo, T., & Wilkenfeld, D. (2019). Mechanistic versus functional understanding. In S. R. Grimm (Ed.), Varieties of Understanding: New Perspectives from Philosophy, Psychology, and Theology (pp. 209-229). New York, NY: Oxford University Press.
Many natural and artificial entities can be predicted and explained both mechanistically, in terms of parts and proximate causal processes, and functionally, in terms of functions and goals. Do these distinct “stances” or “modes of construal” support fundamentally different kinds of understanding? Based on recent work in epistemology and philosophy of science, as well as empirical evidence from cognitive and developmental psychology, we argue for what we call the “weak differentiation thesis”: the claim that mechanistic and functional understanding are distinct in that they involve importantly different objects. We also consider more tentative arguments for the “strong differentiation thesis”: the claim that mechanistic and functional understanding involve different epistemic relationships between mind and world.
Generating explanations can be highly effective in promoting category learning; however, the underlying mechanisms are not fully understood. We propose that engaging in explanation can recruit comparison processes, and that this in turn contributes to the effectiveness of explanation in supporting category learning. Three experiments evaluated the interplay between explanation and various comparison strategies in learning artificial categories. In Experiment 1, as expected, prompting participants to explain items’ category membership led to (a) higher ratings of self-reported comparison processing and (b) increased likelihood of discovering a rule underlying category membership. Indeed, prompts to explain led to more self-reported comparison than did direct prompts to compare pairs of items. Experiment 2 showed that prompts to compare all members of a particular category (“group comparison”) were more effective in supporting rule learning than were pairwise comparison prompts. Experiment 3 found that group comparison (as assessed by self-report) partially mediated the relationship between explanation and category learning. These results suggest that one way in which explanation benefits category learning is by inviting comparisons in the service of identifying broad patterns.
Most theories of kind representation suggest that people posit internal, essence-like factors believed to underlie kind membership and the observable properties of members. Across two studies (N = 234), we show that adults can construe properties of social kinds as products of both internal and structural (stable external) factors. Internalist and structural construals are similar in that both support formal explanations (i.e., “category member has property P due to category membership C”), generic claims (“Cs have P”), and a particular pattern of generalization to individuals when the individuals’ category membership and structural position are preserved. Our findings thus challenge the status of these phenomena as signatures of essentialist thinking. However, once category membership and structural position are unconfounded, different patterns of generalization emerge across internalist and structural construals, as do different judgments concerning category definitions and property mutability. These findings have important implications for reasoning about social kinds.
Evidence is typically consistent with more than one hypothesis. How do we decide which hypothesis to pursue (e.g., to subject to further consideration and testing)? Research has shown that explanatory considerations play an important role in learning and inference: we tend to seek and favor hypotheses that offer good explanations for the evidence we invoke them to explain. Here we report three studies testing the proposal that explanatory considerations similarly inform decisions concerning pursuit. We find that ratings of explanatory goodness predict pursuit (though to a lesser extent than they predict belief), and that these effects hold after adjusting for subjective probability. These findings contribute to a growing body of work suggesting an important role for explanatory considerations in shaping inquiry.
Explanations not only increase understanding; they are often deeply satisfying. In the present research, we explore how this phenomenological sense of “explanatory satisfaction” relates to the functional role of explanation within the process of inquiry. In two studies, we address the following questions: 1) Does explanatory satisfaction track the epistemic, learning-directed features of explanation? and 2) How does explanatory satisfaction relate to both antecedent and subsequent curiosity? In answering these questions, we uncover novel determinants of explanatory satisfaction and contribute to the broader literature on explanation and inquiry.
Curiosity is considered essential for learning and sustained engagement, yet stimulating curiosity in educational contexts remains a challenge. Can people’s curiosity about a topic be stimulated by evidence that the topic has potential value? In two experiments we show that increasing people’s perceptions about the usefulness of a scientific topic also influences their curiosity and subsequent information search. Our results also show that simply presenting interesting facts is not enough to influence curiosity, and that people are more likely to be curious about a topic if they perceive it to be directly valuable to them. Given the link between curiosity and learning, these results have important implications for science communication and education more broadly.
Scientific norms value skepticism; many religious traditions value faith. We test the hypothesis that these different attitudes towards inquiry and belief result in different inferences from epistemic behavior: Whereas the pursuit of evidence or explanations is taken as a signal of commitment to science, forgoing further evidence and explanation is taken as a signal of commitment to religion. Two studies (N = 401) support these predictions. We also find that deciding to pursue inquiry is judged more moral and trustworthy, with moderating effects of participant religiosity and scientism. These findings suggest that epistemic behavior can be a social signal, and shed light on the epistemic and social functions of scientific vs. religious belief.
Representations of social categories help us make sense of the social world, supporting predictions and explanations about groups and individuals. In an experiment with 156 participants, we explore whether children and adults are able to understand category-property associations (such as the association between “girls” and “pink”) in structural terms, locating an object of explanation within a larger structure and identifying structural constraints that act on elements of the structure. We show that children as young as 3-4 years old show signs of structural thinking, and that 5-6 year olds show additional differentiation between structural and nonstructural thinking, yet still fall short of adult performance. These findings introduce structural connections as a new type of non-accidental relationship between a property and a category, and present a viable alternative to internalist accounts of social categories, such as psychological essentialism.
Occam's razor – the idea that, all else being equal, we should pick the simpler hypothesis – plays a prominent role in ordinary and scientific inference. But why are simpler hypotheses better? One attractive hypothesis, known as Bayesian Occam's razor (BOR), is that more complex hypotheses tend to be more flexible – they can accommodate a wider range of possible data – and that flexibility is automatically penalized by Bayesian inference. In two experiments, we provide evidence that people's intuitive probabilistic and explanatory judgments follow the prescriptions of BOR. In particular, people's judgments are consistent with the two most distinctive characteristics of BOR: They penalize hypotheses not only as a function of their numbers of free parameters but also as a function of the size of the parameter space, and they penalize those hypotheses even when their parameters can be "tuned" to fit the data better than comparatively simpler hypotheses.
Much recent work on explanation in the interventionist tradition emphasizes the explanatory value of stable causal generalizations – i.e., causal generalizations that remain true in a wide range of background circumstances. We argue that two separate explanatory virtues are lumped together under the heading of “stability”. We call these two virtues breadth and guidance, respectively. In our view, these two virtues are importantly distinct, but this fact is neglected or at least under-appreciated in the literature on stability. We argue that an adequate theory of explanatory goodness should recognize breadth and guidance as distinct virtues, as breadth and guidance track different ideals of explanation, satisfy different cognitive and pragmatic ends, and play different theoretical roles in (for example) helping us understand the explanatory value of mechanisms. Thus, keeping track of the distinction between these two forms of stability yields a more accurate and perspicuous picture of the role that stability considerations play in explanation.
An actor's mental states – whether she acted knowingly and with bad intentions – typically play an important role in evaluating the extent to which an action is wrong and in determining appropriate levels of punishment. In four experiments, we find that this role for knowledge and intent is significantly weaker when evaluating transgressions of conventional rules as opposed to moral rules. We also find that this attenuated role for knowledge and intent is partly due to the fact that conventional rules are judged to be more arbitrary than moral rules; whereas moral transgressions are associated with actions that are intrinsically wrong (e.g., hitting another person), conventional transgressions are associated with actions that are only contingently wrong (e.g., wearing pajamas to school, which is only wrong if it violates a dress code that could have been otherwise). Finally, we find that it is the perpetrator's belief about the arbitrary or non-arbitrary basis of the rule – not the reality – that drives this differential effect of knowledge and intent across types of transgressions.
Awe has traditionally been considered a religious or spiritual emotion, yet scientists often report that awe motivates them to answer questions about the natural world, and to do so in naturalistic terms. Indeed, awe may be closely related to scientific discovery and theoretical advance. Awe is typically triggered by something vast (either literally or metaphorically) and initiates processes of accommodation, in which existing mental schemas are revised to make sense of the awe-inspiring stimuli. This process of accommodation is essential for the kind of belief revision that characterizes scientific reasoning and theory change. Across six studies, we find that the tendency to experience awe is positively associated with scientific thinking, and that this association is not shared by other positive emotions. Specifically, we show that the disposition to experience awe predicts a more accurate understanding of how science works, rejection of creationism, and rejection of unwarranted teleological explanations more broadly.
Can science explain romantic love, morality, and religious belief? We documented intuitive beliefs about the limits of science in explaining the human mind. We considered both epistemic evaluations (concerning whether science could possibly fully explain a given psychological phenomenon) and nonepistemic judgments (concerning whether scientific explanations for a given phenomenon would generate discomfort), and we identified factors that characterize phenomena judged to fall beyond the scope of science. Across six studies, we found that participants were more likely to judge scientific explanations for psychological phenomena to be impossible and uncomfortable when, among other factors, they support first-person, introspective access (e.g., feeling empathetic as opposed to reaching for objects), contribute to making humans exceptional (e.g., appreciating music as opposed to forgetfulness), and involve conscious will (e.g., acting immorally as opposed to having headaches). These judgments about the scope of science have implications for science education, policy, and the public reception of psychological science.
Teleological explanations, which appeal to a function or purpose (e.g., “kangaroos have long tails for balance”), seem to play a special role within the biological domain. We propose that such explanations are compelling because they are evaluated on the basis of a salient cue: structure-function fit, or the correspondence between a biological feature’s form (e.g., tail length) and its function (e.g., balance). Across five studies with 843 participants in total, we find support for three predictions that follow from this proposal. First, we find that function information decreases reliance on mechanistic considerations when evaluating explanations (Experiments 1-3), indicating the presence of a salient, function-based cue. Second, we demonstrate that structure-function fit is the best candidate for this cue (Experiments 3-4). Third, we show that scientifically-unwarranted teleological explanations are more likely to be accepted under speeded and unspeeded conditions when they are high in structure-function fit (Experiment 5). Experiment 5 also finds that structure-function fit extends beyond biology to teleological explanations in other domains. Jointly, these studies provide a new account of how teleological explanations are evaluated and why they are often (but not universally) compelling.