Publications

In Press

Kinney, D., & Lombrozo, T. (2024). Building compressed causal models of the world. Cognitive Psychology, 155, 101682.

A given causal system can be represented in a variety of ways. How do agents determine which variables to include in their causal representations, and at what level of granularity? Using techniques from Bayesian networks, information theory, and decision theory, we develop a formal theory according to which causal representations reflect a trade-off between compression and informativeness, where the optimal trade-off depends on the decision-theoretic value of information for a given agent in a given context. This theory predicts that, all else being equal, agents prefer causal models that are as compressed as possible. When compression is associated with information loss, however, all else is not equal, and our theory predicts that agents will favor compressed models only when the information they sacrifice is not informative with respect to the agent’s anticipated decisions. We then show, across six studies reported here (N=2,364) and one study reported in the supplemental materials (N=182), that participants’ preferences over causal models are in keeping with the predictions of our theory. Our theory offers a unification of different dimensions of causal evaluation identified within the philosophy of science (proportionality and stability), and contributes to a more general picture of human cognition according to which the capacity to create compressed (causal) representations plays a central role.

PDF

2024

Cusimano, C., Zorrilla, N., Danks, D., & Lombrozo, T. (2024). Psychological Freedom, Rationality, and the Naive Theory of Reasoning. Journal of Experimental Psychology: General, 153(3), 837–863. https://doi.org/10.1037/xge0001540

To make sense of the social world, people reason about others’ mental states, including whether and in what ways others can form new mental states. We propose that people’s judgments concerning the dynamics of mental state change invoke a “naive theory of reasoning.” On this theory, people conceptualize reasoning as a rational, semi-autonomous process that individuals can leverage, but not override, to form new rational mental states. Across six experiments, we show that this account of people’s naive theory of reasoning predicts judgments about others’ ability to form rational and irrational beliefs, desires, and intentions, as well as others’ ability to act rationally and irrationally. This account predicts when, and explains why, people judge others as psychologically constrained by coercion and other forms of situational pressure.

PDF
Foster-Hanson, E., & Lombrozo, T. (2024). Functional Explanations Link Gender Essentialism and Normativity. In Proceedings of the 46th Annual Meeting of the Cognitive Science Society (pp. 2527-2536).

Why do beliefs that gender differences are innate (i.e., gender essentialism) sometimes lead to normative judgments about how individual people ought to be? In the current study, we propose that a missing premise linking gender essentialism and normativity rests on the common folk-biological assumption that biological features serve a biological function. When participants (N = 289) learned that a novel feature of the gender category “mothers” was common and innate, they overwhelmingly assumed that it must have served some function across human history. When they learned that it served a historical function, they assumed that it must still be beneficial in today’s environment. When participants learned that the feature was beneficial, they judged that contemporary mothers ought to have it, and they were more willing to intervene to ensure that they would by constraining the choices of individual mothers. Thus, we suggest that essentialist assumptions can shape normative social judgments via the explanations people tend to generate about why certain features of natural kind categories become common to begin with. This finding articulates one manifestation of the naturalistic fallacy, with implications for policy debates about bodily autonomy and choice.

PDF

Consider the following two (hypothetical) generic causal claims: “Living in a neighborhood with many families with children increases purchases of bicycles” and “living in an affluent neighborhood with many families with children increases purchases of bicycles.” These claims not only differ in what they suggest about how bicycle ownership is distributed across different neighborhoods (i.e., “the data”), but also have the potential to communicate something about the speakers’ values: namely, the prominence they accord to affluence in representing and making decisions about the social world. Here, we examine the relationship between the level of granularity with which a cause is described in a generic causal claim (e.g., neighborhood vs. affluent neighborhood) and the value of the information contained in the causal model that generates that claim. We argue that listeners who know any two of the following can make reliable inferences about the third: 1) the level of granularity at which a speaker makes a generic causal claim, 2) the speaker’s values, and 3) the data available to the speaker. We present results of four experiments (N = 1323) in the domain of social categories that provide evidence in keeping with these predictions.

PDF

Why were women given the right to vote? “Because it is morally wrong to deny women the right to vote.” This explanation does not seem to fit the typical pattern for explaining an event: rather than citing a cause, it appeals to an ethical claim. Do people judge ethical claims to be genuinely explanatory? And if so, why? In Studies 1 (N = 220) and 2 (N = 293), we find that many participants accept ethical explanations for social change and that this is predicted by their meta-ethical beliefs in moral progress and moral principles, suggesting that these participants treat morality as a directional feature of the world, somewhat akin to a causal force. In Studies 3 (N = 513) and 4 (N = 328), we find that participants recognize this relationship between ethical explanations and meta-ethical commitments, using the former to make inferences about individuals’ beliefs in moral progress and moral principles. Together these studies demonstrate that our beliefs about the nature of morality shape our judgments of explanations and that explanations shape our inferences about others’ moral commitments.

PDF
Lewry, C., Asifriyaz, S., & Lombrozo, T. (2024). Lay theories of moral progress. Cognitive Science, 48(11), e70018.

Many consider the world to be morally better today than it was in the past and expect moral improvement to continue. How do people explain what drives this change? In this paper, we identify two ways people might think about how moral progress occurs: that it is driven by human action (i.e., if people did not actively work to make the world better, moral progress would not occur) or that it is driven by an unspecified mechanism (i.e., that our world is destined to morally improve, but without specifying a role for human action). In Study 1 (N = 147), we find that those who more strongly believe that the mechanism of moral progress is human action are more likely to believe their own intervention is warranted to correct a moral setback. In Study 2 (N = 145), we find that this translates to intended action: those who more strongly believe moral progress is driven by human action report that they would donate more money to correct a moral setback. In Study 3 (N = 297), participants generate their own explanations for why moral progress occurs. We find that participants’ donation intentions are predicted by whether their explanations state that human action drives moral progress. Together, these studies suggest that beliefs about the mechanisms of moral progress have important implications for engaging in social action.

PDF

The Novelty Seeking Model (NSM) places “novelty” at center stage in characterizing the mechanisms behind curiosity. We argue that the NSM's conception of novelty is too broad, obscuring distinct constructs. More critically, the NSM underemphasizes triggers of curiosity that better unify these constructs and that have received stronger empirical support: those that signal the potential for useful learning.

PDF
Lombrozo, T. (2024). Learning by thinking in natural and artificial minds. Trends in Cognitive Sciences, 28(11), 1011-1022.

Canonical cases of learning involve novel observations external to the mind, but learning can also occur through mental processes such as explaining to oneself, mental simulation, analogical comparison, and reasoning. Recent advances in artificial intelligence (AI) reveal that such learning is not restricted to human minds: artificial minds can also self-correct and arrive at new conclusions by engaging in processes of 'learning by thinking' (LbT). How can elements already in the mind generate new knowledge? This article aims to resolve this paradox, and in so doing highlights an important feature of natural and artificial minds – to navigate uncertain environments with variable goals, minds with limited resources must construct knowledge representations 'on demand'. LbT supports this construction.

PDF
Modrek, A. S., & Lombrozo, T. (2024). Allow Me to Explain: Benefits of Explaining Extend to Distal Academic Performance. Cognitive Science, 48(9), e13496.

How does the act of explaining influence learning? Prior work has studied effects of explaining through a predominantly proximal lens, measuring short-term outcomes or manipulations within lab settings. Here, we ask whether the benefits of explaining extend to academic performance over time. Specifically, does the quality and frequency of student explanations predict students’ later performance on standardized tests of math and English? In Study 1 (N = 127 5th−6th graders), participants completed a causal learning activity during which their explanation quality was evaluated. Controlling for prior test scores, explanation quality directly predicted both math and English standardized test scores the following year. In Study 2 (N = 20,384 10th graders), participants reported aspects of teachers’ explanations and their own. Controlling for prior test scores, students’ own explanations predicted both math and English state standardized test scores, and teacher explanations were linked to test performance through students’ own explanations. Taken together, these findings suggest that benefits of explaining may result in part from the development of a metacognitive explanatory skill that transfers across domains and over time. Implications for cognitive science, pedagogy, and education are discussed.

PDF
Oktar, K., Byers, B., & Lombrozo, T. (2024). Are disagreements just differences in beliefs? In Proceedings of the 46th Annual Meeting of the Cognitive Science Society (pp. 2527-2536).

Decades of research have examined the consequences of disagreement, both negative (harm to relationships) and positive (fostering learning opportunities). Yet the psychological mechanisms underlying disagreement judgments themselves are poorly understood. Much research assumes that disagreement tracks divergence: the difference between two individuals’ beliefs with respect to a proposition. We test divergence as a theory of interpersonal disagreement through two experiments (N = 60, N = 60) and predictive models. Our data and modeling show that judgments of disagreement track divergence, but also the direction and extremity of beliefs. Critically, disagreement judgments track key social judgments (e.g., inferences of warmth, competence, and bias) above and beyond divergence, with notable variation across domains.

PDF
Oktar, K., Sucholutsky, I., Lombrozo, T., & Griffiths, T. (2024). Dimensions of Disagreement: Divergence and Misalignment in Cognitive Science and Artificial Intelligence. Decision.

Our understanding of disagreement is rooted in psychological studies of human behavior, which typically cast disagreement as divergence: two agents forming diverging evaluations of the same object. Recent work in artificial intelligence highlights how disagreement can also arise from misalignment in how agents represent that object. Here, we formally describe these two dimensions of disagreement, clarify the relationship between them, and argue that strategies for conflict resolution and collaboration are likely to be ineffective (or even backfire) if they do not consider misalignment in representations. Moreover, we identify how taking misalignment into account can enrich current research on judgment and decision making, from biased advice taking to algorithm aversion, and discuss implications for artificial intelligence research.

PDF
Oktar, K., Lombrozo, T., & Griffiths, T. (2024). Learning From Aggregated Opinion. Psychological Science, 35(9), 1010–1024.

The capacity to leverage information from others’ opinions is a hallmark of human cognition. Consequently, past research has investigated how we learn from others’ testimony. Yet a distinct form of social information—aggregated opinion—increasingly guides our judgments and decisions. We investigated how people learn from such information by conducting three experiments with participants recruited online within the United States (N = 886) comparing the predictions of three computational models: a Bayesian solution to this problem that can be implemented by a simple strategy for combining proportions with prior beliefs, and two alternatives from epistemology and economics. Across all studies, we found the strongest concordance between participants’ judgments and the predictions of the Bayesian model, though some participants’ judgments were better captured by alternative strategies. These findings lay the groundwork for future research and show that people draw systematic inferences from aggregated opinion, often in line with a Bayesian solution.

PDF
Vasil, N., Srinivasan, M., Ellwood-Lowe, M. E., Delaney, S., Gopnik, A., & Lombrozo, T. (2024). Structural explanations lead young children and adults to rectify resource inequalities. Journal of Experimental Child Psychology, 242, 105896. https://doi.org/10.1016/j.jecp.2024.105896

Decisions about how to divide resources have profound social and practical consequences. Do explanations regarding the source of existing inequalities influence how children and adults allocate new resources? When 3-6-year-old children (N=201) learned that inequalities were caused by structural forces (stable external constraints affecting access to resources) as opposed to internal forces (effort), they rectified inequalities, overriding previously-documented tendencies to perpetuate inequality or divide resources equally. Adults (N=201) were more likely than children to rectify inequality spontaneously; this was further strengthened by a structural explanation but reversed by an effort-based explanation. Allocation behaviors were mirrored in judgments of which allocation choices by others were appropriate. These findings reveal how explanations powerfully guide social reasoning and action from childhood through adulthood.

PDF
Vesga, A., Van Leeuwen, N., & Lombrozo, T. (2024). Evidence for distinct cognitive attitudes of belief in theory of mind. In Proceedings of the 46th Annual Meeting of the Cognitive Science Society (pp. 2527-2536).

Theory of mind is often referred to as “belief-desire” psychology, as these mental states (belief, desire) are accorded a central role. However, extant research has made it clear that defining the notion of belief or characterizing a consistent set of key characteristics is no trivial task. Across two studies (N=283, N=332), we explore the hypothesis that laypeople make more fine-grained distinctions among different kinds of “belief.” Specifically, we find evidence that beliefs with matching contents are judged differently depending on whether those beliefs are seen as playing predominantly epistemic roles (such as tracking evidence with the aim of forming accurate representations) versus non-epistemic roles (such as social signaling). Beliefs with epistemic aims, compared to those with non-epistemic aims, are more likely to be described with the term “thinks” (vs. “believes”), and to be redescribed in probabilistic (vs. binary) terms. These findings call for a refinement of the concepts posited to underlie theory of mind and offer indirect support for the idea that human psychology in fact features more than one kind of belief.

PDF
Vrantsidis, T., & Lombrozo, T. (2024). Inside Ockham’s Razor: A mechanism driving preferences for simpler explanations. Memory & Cognition.

People often prefer simpler explanations, defined as those that posit the presence of fewer causes (e.g., positing the presence of a single cause, Cause A, rather than two causes, Causes B and C, to explain observed effects). Here we test one hypothesis about the mechanisms underlying this preference: that people tend to reason as if they are using “agnostic” explanations, which remain neutral about the presence/absence of additional causes (e.g., comparing “A” vs. “B and C”, while remaining neutral about the status of B and C when considering “A”, or of A when considering “B and C”), even in cases where “atheist” explanations, which specify the absence of additional causes (e.g., “A and not B or C” vs. “B and C and not A”), are more appropriate. Three studies with US-based samples (total N = 982) tested this idea by using scenarios for which agnostic and atheist strategies produce diverging simplicity/complexity preferences, and asking participants to compare explanations provided in atheist form. Results suggest that people tend to ignore absent causes, thus overgeneralizing agnostic strategies, which can produce preferences for simpler explanations even when the complex explanation is objectively more probable. However, these unwarranted preferences were reduced by manipulations that encouraged participants to consider absent causes: making absences necessary to produce the effects (Study 2), or describing absences as causes that produce alternative effects (Study 3). These results shed light on the mechanisms driving preferences for simpler explanations, and on when these mechanisms are likely to lead people astray.

PDF

2023

People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an “illusion of objectivity.” We identify an important domain of life in which people harbor little illusion about their biases – when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2–4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People’s tendency to be aware and proud of their biases provides both new opportunities, and new challenges, for resolving ideological conflict and improving reasoning.

PDF

Who is more committed to science: the person who learns about a scientific consensus and doesn’t ask questions, or the person who learns about a scientific consensus and decides to pursue further inquiry? Who exhibits greater commitment to religious teachings: the person who accepts doctrine without question, or the person who seeks further evidence and explanations? Across three experiments (N = 801) we investigate the inferences drawn about an individual on the basis of their epistemic behavior – in particular, their decision to pursue or forgo further inquiry (evidence or explanation) about scientific or religious claims. We find that the decision to pursue further inquiry (about science or religion) is taken to signal greater commitment to science and to truth, as well as trustworthiness and good moral character (Studies 1-3). This is true even in the case of claims regarding controversial science topics, such as anthropogenic climate change (Study 3). In contrast, the decision to forgo further inquiry is taken to signal greater commitment to religion, but only when the claim under consideration contains religious content (Studies 1-3). These findings shed light on perceived scientific and religious norms in our predominantly American and Christian sample, as well as the rich social inferences drawn on the basis of epistemic behavior.

PDF

Curiosity plays a key role in directing learning throughout the lifespan. Prior work finds that violations of expectations can be powerful triggers of curiosity in both children and adults, but it is unclear which expectation-violating events induce the greatest curiosity and how this might vary over development. Some theories have suggested a U-shaped function such that stimuli of moderate extremity pique the greatest curiosity. However, expectation-violations vary not only in degree, but in kind: for example, some things violate an intuitive theory (e.g., an alligator that can talk) and others are merely unlikely (e.g., an alligator hiding under your bed). Combining research on curiosity with distinctions posited in the cognitive science of religion, we test whether minimally counterintuitive (MCI) stimuli, which involve one violation of an intuitive theory, are especially effective at triggering curiosity. We presented adults (N = 77) and 4- and 5-year-olds (N = 36) in the United States with stimuli that were ordinary, unlikely, MCI, and very counterintuitive (VCI) and asked which one they would like to learn more about. Adults and 5-year-olds chose Unlikely over Ordinary and MCI over Unlikely, but not VCI over MCI, more often than chance. Our results suggest that (i) minimally counterintuitive stimuli trigger greater curiosity than merely unlikely stimuli, (ii) surprisingness has diminishing returns, and (iii) sensitivity to surprisingness increases with age, appearing in our task by age 5.

PDF

Adults in prior work often endorse explanations appealing to purposes (e.g., “pencils exist so people can write with them”), even when these ‘teleological’ explanations are scientifically unwarranted (e.g., “water exists so life can survive on Earth”). We explore teleological endorsement in a novel domain—human purpose—and its relationship to moral judgments. Across studies conducted online with a sample of US-recruited adults, we ask: (1) Do participants believe the human species exists for a purpose? (2) Do these beliefs predict moral condemnation of individuals who fail to fulfill this purpose? And (3) what explains the link between teleological beliefs and moral condemnation? Study 1 found that participants frequently endorsed teleological claims about human existence (e.g., humans exist to procreate), and these beliefs correlated with moral condemnation of purpose violations (e.g., condemning those who do not procreate). Study 2 found evidence of a bi-directional causal relationship: stipulating a species’ purpose results in moral condemnation of purpose violations, and stipulating that an action is immoral increases endorsement that the species exists for that purpose. Study 3 found evidence that when participants believe a species exists to perform some action, they infer this action is good for the species, and this in turn supports moral condemnation of individuals who choose not to perform the action. Study 4 found evidence that believing an action is good for the species partially mediates the relationship between human purpose beliefs and moral condemnation. These findings shed light on how our descriptive understanding can shape our prescriptive judgments.

PDF
Lombrozo, T., & Liquin, E. G. (2023). Explanation is effective because it is selective. Current Directions in Psychological Science, 32(3), 212-219. https://doi.org/10.1177/09637214231156106

Humans are avid explainers: we ask “why?” and derive satisfaction from a good answer. But humans are also selective explainers: only some observations prompt us to ask “why?”, and only some answers are satisfying. This article reviews recent work on selectivity in explanation-seeking curiosity and explanatory satisfaction, with a focus on how this selectivity makes us effective learners in a complex world. Research finds that curiosity about the answer to a why-question is stronger when it is expected to yield useful learning, and that explanations are judged more satisfying when they are perceived to support useful learning. While such perceptions are imperfect, there is nonetheless evidence that seeking and evaluating explanations – in the selective way humans do – can play an important role in learning.

PDF

A growing body of research suggests that scientific and religious beliefs are often held and justified in different ways. In three studies with 707 participants, we examine the distinctive profiles of beliefs in these domains. In Study 1, we find that participants report evidence and explanatory considerations (making sense of things) as dominant reasons for beliefs across domains. However, cuing the religious domain elevates endorsement of non-scientific justifications for belief, such as ethical considerations (e.g., believing it encourages people to be good), affiliation (what loved ones believe) and intuition (what feels true in one’s heart). Study 2 replicates these differences with specific scientific and religious beliefs held with equal confidence, and documents further domain differences in beliefs’ personal importance, openness to revision, and perceived objectivity. Study 3 replicates these differences, further finding that counter-consensus beliefs about contentious science topics (such as climate change and vaccination) often have properties resembling religious beliefs, while counter-religious beliefs about religion (e.g. “There is no God”) have properties that more closely resemble beliefs about science. We suggest that beliefs are held and justified within coherent epistemic frameworks, with individuals using different frameworks in different contexts and domains.

PDF

What changes people’s judgments on moral issues, such as the ethics of abortion or eating meat? On some views, moral judgments result from deliberation, such that reasons and reasoning should be primary drivers of moral change. On other views, moral judgments reflect intuition, with reasons offered as post-hoc rationalizations. We test predictions of these accounts by investigating whether exposure to a moral philosophy course (vs. control courses) changes moral judgments, and if so, via what mechanism(s). In line with deliberative accounts of morality, we find that exposure to moral philosophy changes moral views. In line with intuitionist accounts, we find that the mechanism of change is reduced reliance on intuition, not increased reliance on deliberation; in fact, deliberation is related to increased confidence in judgments, not change. These findings suggest a new way to reconcile deliberative and intuitionist accounts: Exposure to reasons and evidence can change moral views, but primarily by discounting intuitions.

PDF
Davoodi, T., & Lombrozo, T. (2023). Scientific and Religious Explanations, Together and Apart. In J. Schupbach & D. Glass (Eds.), Conjunctive Explanations (pp. 219-245). Routledge. https://doi.org/10.4324/9781003184324-13

Scientific and religious explanations often coexist in the sense that they are both endorsed by the same individuals, and they are sometimes conjoined such that a single explanation draws upon both scientific and religious components. In this chapter we consider the psychology of such explanations, drawing upon recent research in cognitive and social psychology. We argue that scientific and religious explanations often serve different psychological functions, with scientific explanations seen as better serving epistemic functions (such as supporting accurate models of the world), and religious explanations seen as better serving non-epistemic functions (such as offering emotional comfort or supporting moral behavior). This functional differentiation points to a potential benefit of conjunctive explanations: by fulfilling multiple psychological functions, they will sometimes satisfy a broader range of explanatory goals. Generalizing from the case of science and religion, we suggest that conjunctive explanations may be especially appealing when a given explanatory framework faces tradeoffs between different explanatory goals (such as generality versus precision), resulting in an advantage to explanations that draw upon multiple explanatory frameworks instantiating different tradeoffs.

PDF
Van Leeuwen, N., & Lombrozo, T. (2023). The Puzzle of Belief. Cognitive Science, 47(2), e13245. https://doi.org/10.1111/cogs.13245

The notion of belief appears frequently in cognitive science. Yet it has resisted definition of the sort that could clarify inquiry. How then might a cognitive science of belief proceed? Here we propose a form of pluralism about believing. According to this view, there are importantly different ways to believe an idea. These distinct psychological kinds occur within a multi-dimensional property space, with different property clusters within that space constituting distinct varieties of believing. We propose that discovering such property clusters is empirically tractable, and that this approach can help sidestep merely verbal disputes about what constitutes “belief.”

PDF

2022

Blanchard, T., Murray, D., & Lombrozo, T. (2022). Experiments on Causal Exclusion. Mind and Language, 37(5), 1067-1089. https://doi.org/10.1111/mila.12343

Intuitions play an important role in the debate on the causal status of high-level properties. For instance, Kim has claimed that his “exclusion argument” relies on “a perfectly intuitive ... understanding of the causal relation.” We report the results of three experiments examining whether laypeople really have the relevant intuitions. We find little support for Kim's view and the principles on which it relies. Instead, we find that laypeople are willing to count both a multiply realized property and its realizers as causes, and regard the systematic overdetermination implied by this view as unproblematic.

PDF

Identifying abstract relations is essential for commonsense reasoning. Research suggests that even young children can infer relations such as “same” and “different,” but often fail to apply these concepts. Might the process of explaining facilitate the recognition and application of relational concepts? Based on prior work suggesting that explanation can be a powerful tool to promote abstract reasoning, we predicted that children would be more likely to discover and use an abstract relational rule when they were prompted to explain observations instantiating that rule, compared to when they received demonstration alone. Five- and 6-year-olds were given a modified Relational Match to Sample (RMTS) task, with repeated demonstrations of relational (same) matches by an adult. Half of the children were prompted to explain these matches; the other half reported the match they observed. Children who were prompted to explain showed immediate, stable success, while those only asked to report the outcome of the pedagogical demonstration did not. Findings provide evidence that explanation facilitates early abstraction over and above demonstration alone.

PDF

How did the universe come to exist? What happens after we die? Answers to existential questions tend to elicit both scientific and religious explanations, offering a unique opportunity to evaluate how these domains differ in their psychological roles. Across 3 studies (N = 1,647), we investigate whether (and by whom) scientific and religious explanations are perceived to have epistemic merits—such as evidential and logical support—versus nonepistemic merits—such as social, emotional, or moral benefits. We find that scientific explanations are attributed more epistemic merits than are religious explanations (Study 1), that an explanation’s perceived epistemic merits are more strongly predicted by endorsement of that explanation for science than for religion (Study 2), and that scientific explanations are more likely to be generated when participants are prompted for an explanation high in epistemic merits (Study 3). By contrast, we find that religious explanations are attributed more nonepistemic merits than are scientific explanations (Study 1), that an explanation’s perceived nonepistemic merits are more strongly predicted by endorsement of that explanation for religion than for science (Study 2), and that religious explanations are more likely to be generated when participants are prompted for an explanation high in nonepistemic merits (Study 3). These findings inform theories of the relationship between religion and science, and they provide insight into accounts of the coexistence of scientific and religious cognition.

PDF

How and why does the moon cause the tides? How and why does God answer prayers? For many, the answer to the former question is unknown; the answer to the latter question is a mystery. Across three studies testing a largely Christian sample within the United States (N = 2,524), we investigate attitudes towards ignorance and inquiry as a window onto scientific versus religious belief. In Experiment 1, we find that science and religion are associated with different forms of ignorance: scientific ignorance is typically expressed as a personal unknown (“it’s unknown to me”), whereas religious ignorance is expressed as a universal mystery (“it’s a mystery”), with scientific unknowns additionally regarded as more viable and valuable targets for inquiry. In Experiment 2, we show that these forms of ignorance are differentially associated with epistemic goals and norms: expressing ignorance in the form of “unknown” (versus “mystery”) more strongly signals epistemic values and achievements. Experiments 2 and 3 additionally show that ignorance is perceived to be a greater threat to science and scientific belief than to religion and religious belief. Together, these studies shed light on the psychological roles of scientific and religious belief in human cognition.

PDF

Curiosity is considered essential for learning and sustained engagement, yet stimulating curiosity in educational contexts remains a challenge. Can people’s curiosity about a scientific topic be stimulated by providing evidence that knowledge about the topic has potential value to society? Here, we show that increasing perceptions of ‘social usefulness’ regarding a scientific topic also increases curiosity and subsequent information search. Our results also show that simply presenting interesting facts is not enough to influence curiosity, and that people are more likely to be curious about a scientific topic if they perceive it to be useful personally and socially. Given the link between curiosity and learning, these results have important implications for science communication and education more broadly.

PDF
Foster-Hanson, E., & Lombrozo, T. (2022). What are Men and Mothers for? The Causes and Consequences of Functional Reasoning about Social Categories. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society (pp. 824-832).

Do people attribute functions to gendered social categories? (For instance, is there something men or mothers are for?) And if so, do such attributions of function have consequences for normative judgments about what members of these social categories ought to do? In the current study, participants (N = 366) rated their agreement with 15 statements about the “true functions” of different social categories, in triads of matched masculine, feminine, and superordinate categories (e.g., fathers, mothers, and parents). Participants endorsed functional claims more for some social categories (e.g., parents) than others (e.g., kids), and their background beliefs about gender predicted variation in functional reasoning. However, across categories, participants judged that fulfilling true functions was ‘natural’ for members of the category, and they judged that category members ought to fulfill their true functions.

PDF

Knowing which features are frequent among a biological kind (e.g., that most zebras have stripes) shapes people’s representations of what category members are like (e.g., that typical zebras have stripes) and normative judgments about what they ought to be like (e.g., that zebras should have stripes). In the current work, we ask if people’s inclination to explain why features are frequent is a key mechanism through which what “is” shapes beliefs about what “ought” to be. Across four studies (N = 591), we find that frequent features are often explained by appeal to feature function (e.g., that stripes are for camouflage), that functional explanations in turn shape judgments of typicality, and that functional explanations and typicality both predict normative judgments that category members ought to have functional features. We also identify the causal assumptions that license inferences from feature frequency and function, as well as the nature of the normative inferences that are drawn: by specifying an instrumental goal (e.g., camouflage), functional explanations establish a basis for normative evaluation. These findings shed light on how and why our representations of how the natural world is shape our judgments of how it ought to be.

PDF
Giffin, C., & Lombrozo, T. (2022). Mens Rea in Moral Judgment and Criminal Law. In M. Vargas & J. Doris (Eds.), Oxford Handbook of Moral Psychology. Oxford University Press.
PDF
Kinney, D., & Lombrozo, T. (2022). Evaluations of Causal Claims Reflect a Trade-Off Between Informativeness and Compression. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society (pp. 621-627).

The same causal system can be accurately described in many ways. What governs the evaluation of these choices? We propose a novel, formal account of causal evaluation according to which evaluations of causal claims reflect the joint demands of maximal informativeness and maximal compression. Across two experiments, we show that evaluations of more and less compressed causal claims are sensitive to the amount of information lost by choosing the more compressed causal claim over a less compressed one, regardless of whether the compression is realized by coarsening a single variable or by eliding a background condition. This offers a unified account of two dimensions along which causal claims are evaluated (proportionality and stability), and contributes to a more general picture of human cognition according to which the capacity to create compressed (causal) representations plays a central role.

PDF
Lewry, C., & Lombrozo, T. (2022). Ethical Explanations. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society (pp. 42-48).

“Slavery ended in the United States because slavery is morally wrong.” This explanation does not seem to fit the typical criteria for explaining an event, since it appeals to ethics rather than causal factors as the reason for this social change. But do people perceive these ethical claims as explanatory, and if so, why? In Study 1, we find that people accept ethical explanations for social change and that this is predicted by their meta-ethical beliefs in moral progress and moral objectivism, suggesting that they treat morality somewhat akin to a causal force. In Study 2, we find that people recognize this relationship between ethical explanations and meta-ethical commitments, using the former to make inferences about individuals’ beliefs in moral progress and objectivism. Together these studies demonstrate that our moral commitments shape our judgments of explanations and that explanations shape our moral inferences about others.

PDF

Many explanations have a distinctive, positive phenomenology: receiving or generating these explanations feels satisfying. Accordingly, we might expect this feeling of explanatory satisfaction to reinforce and motivate inquiry. Across five studies, we investigate how explanatory satisfaction plays this role: by motivating and reinforcing inquiry quite generally (“brute motivation” account), or by selectively guiding inquiry to support useful learning about the target of explanation (“aligned motivation” account). In Studies 1–2, we find that satisfaction with an explanation is related to several measures of perceived useful learning, and that greater satisfaction in turn predicts stronger curiosity about questions related to the explanation. However, in Studies 2–4, we find only tenuous evidence that satisfaction is related to actual learning, measured objectively through multiple-choice or free recall tests. In Study 4, we additionally show that perceptions of learning fully explain one seemingly specious feature of explanatory preferences studied in prior research: the preference for uninformative “reductive” explanations. Finally, in Study 5, we find that perceived learning is (at least in part) causally responsible for feelings of satisfaction. Together, these results point to what we call the “imperfectly aligned motivation” account: explanatory satisfaction selectively motivates inquiry towards learning explanatory information, but primarily through fallible perceptions of learning. Thus, satisfaction is likely to guide individuals towards lines of inquiry that support perceptions of learning, whether or not individuals actually are learning.

PDF
Oktar, K., & Lombrozo, T. (2022). Mechanisms of Belief Persistence in the Face of Societal Disagreement. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society (pp. 1277-1283).

People have a remarkable ability to remain steadfast in their beliefs in the face of large-scale disagreement. This has important consequences (e.g., societal polarization), yet its psychological underpinnings are poorly understood. In this paper, we answer foundational questions regarding belief persistence, from its prevalence to variability. Across two experiments (N = 356, N = 354), we find that participants are aware of societal disagreement about controversial issues, yet overwhelmingly (~85%) do not question their views if asked to reflect on this disagreement. Both studies provide evidence that explanations for persistence vary across domains, with epistemic and meta-epistemic explanations among the most prevalent.

PDF

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people’s folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker’s commitment and authenticity—with only inferences about the decision-maker’s authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

PDF

Are causal explanations (e.g., “she switched careers because of the COVID pandemic”) treated differently from the corresponding claims that one factor caused another (e.g., “the COVID pandemic caused her to switch careers”)? We examined whether explanatory and causal claims diverge in their responsiveness to two different types of information: covariation strength and mechanism information. We report five experiments with 1,730 participants total, showing that compared to judgments of causal strength, explanatory judgments tend to be more sensitive to mechanism and less sensitive to covariation – even though explanatory judgments respond to both types of information. We also report exploratory comparisons to judgments of understanding, and discuss implications of our findings for theories of explanation, understanding, and causal attribution. These findings shed light on the potentially unique role of explanation in cognition.

PDF

Explanations highlight inductively rich relationships that support further generalizations: if a knife is sharp because it is for cutting, we can infer that other things for cutting might also be sharp. Do children see explanations as good guides to generalization? We asked 108 4- to 7-year-old children to evaluate mechanistic, functional, and categorical explanations of object properties, and to generalize those properties to novel objects on the basis of shared mechanisms, functions, or category membership. Children were significantly more likely to generalize when the explanation they had received matched the subsequent basis for generalization (e.g., generalizing on the basis of a shared mechanism after hearing a mechanistic explanation). This effect appeared to be driven by older children. Explanation-to-generalization coordination also appeared to vary across relationships, mirroring the development of corresponding explanatory preferences. These findings fill an important gap in our understanding of how explanations guide young children’s generalization and learning.

PDF

People often face the challenge of evaluating competing explanations. One approach is to assess the explanations’ relative probabilities – e.g., applying Bayesian inference to compute their posterior probabilities. Another approach is to consider an explanation’s qualities or ‘virtues’, such as its relative simplicity (i.e., the number of unexplained causes it invokes). The current work investigates how these two approaches are related. Study 1 found that simplicity is used to infer the inputs to Bayesian inference (explanations’ priors and likelihoods). Studies 1 and 2 found that simplicity is also used as a direct cue to the outputs of Bayesian inference (the posterior probability of an explanation), such that simplicity affects estimates of posterior probability even after controlling for elicited (Study 1) or provided (Study 2) priors and likelihoods, with simplicity having a larger effect in Study 1, where posteriors are more uncertain and difficult to compute. Comparing Studies 1 and 2 also suggested that simplicity plays additional roles unrelated to approximating probabilities, as reflected in simplicity’s effect on how ‘satisfying’ (vs. probable) an explanation is, which remained largely unaffected by the difficulty of computing posteriors. Together, these results suggest that the virtue of simplicity is used in multiple ways to approximate probabilities (i.e., serving as a cue to priors, likelihoods, and posteriors) when these probabilities are otherwise uncertain or difficult to compute, but that the influence of simplicity also goes beyond these roles.

2021

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

PDF
Cusimano, C., Zorrilla, N. C., Danks, D., & Lombrozo, T. (2021). Reason-based constraint in theory of mind. In Proceedings of the 43rd Annual Conference of the Cognitive Science Society (pp. 105-111).

In the face of strong evidence that a coin landed heads, can someone simply choose to believe it landed tails? Knowing that a large earthquake could result in personal tragedy, can someone simply choose to desire that it occur? We propose that in the face of strong reasons to adopt a given belief or desire, people are perceived to lack control: they cannot simply believe or desire otherwise. We test this “reason-based constraint” account of mental state change, and find that people reliably judge that evidence constrains belief formation, and utility constrains desire formation, in others. These results were not explained by a heuristic that simply treats irrational mental states as impossible to adopt intentionally. Rather, constraint results from the perceived influence of reasons on reasoning: people judge others as free to adopt irrational attitudes through actions that eliminate their awareness of strong reasons. These findings fill an important gap in our understanding of folk psychological reasoning, with implications for attributions of autonomy and moral responsibility.

PDF

Scientific reasoning is characterized by commitments to evidence and objectivity. New research suggests that under some conditions, people are prone to reject these commitments, and instead sanction motivated reasoning and bias. Moreover, people’s tendency to devalue scientific reasoning likely explains the emergence and persistence of many biased beliefs. However, recent work in epistemology has identified ways in which bias might be legitimately incorporated into belief formation. Researchers can leverage these insights to evaluate when commonsense affirmation of bias is justified and when it is unjustified and therefore a good target for intervention. Making reasoning more scientific may require more than merely teaching people what constitutes scientific reasoning; it may require affirming the value of such reasoning in the first place.

PDF

Our actions and decisions are regularly influenced by the social environment around us. Can social cues be leveraged to induce curiosity and affect subsequent behavior? Across two experiments, we show that curiosity is contagious: the social environment can influence people’s curiosity about the answers to scientific questions. Participants were presented with everyday questions about science from a popular on-line forum, and these were shown with a high or low number of up-votes as a social cue to popularity. Participants indicated their curiosity about the answers, and they were given an opportunity to reveal a subset of those answers. Participants reported greater curiosity about the answers to questions when the questions were presented with a high (vs. low) number of up-votes, and they were also more likely to choose to reveal the answers to questions with a high (vs. low) number of up-votes. These effects were partially mediated by surprise and by the inferred usefulness of knowledge, with a more dramatic effect of low up-votes in reducing curiosity than of high up-votes in boosting curiosity. Taken together, these results highlight the important role social information plays in shaping our curiosity.

PDF
Foster-Hanson, E., & Lombrozo, T. (2021). The function of function: People use teleological information to predict prevalence. In Proceedings of the 43rd Annual Meeting of the Cognitive Science Society (pp. 3464-3465).

Folk-biological concepts are sensitive to both statistical information about feature prevalence (Hampton, 1995; Kim & Murphy, 2011; Rosch & Mervis, 1975) and teleological beliefs about function (Atran, 1995; Keil, 1994; Kelemen, Rottman, & Seston, 2013; Lombrozo & Rehder, 2012), but it is unknown how these two types of information interact to shape concepts. In three studies (N = 438) using novel animal kinds, we found that information about prevalence and teleology inform each other: People assume that common features are functional, and they assume that functional features are common. However, people use teleological information to predict the future distribution of features across the category, despite conflicting information about current prevalence. Thus, both information about prevalence and teleological beliefs serve important conceptual functions: Prevalence information encodes the current state of the category, while teleological functions provide a means of predicting future category change.

PDF
Gruber, J., Mendle, J., Lindquist, K. A., Schmader, T., Clark, L. A., Bliss-Moreau, E., Akinola, M., Atlas, L., Barch, D. M., Barrett, L. F., Borelli, J. L., Brannon, T. N., Bunge, S. A., Campos, B., Cantlon, J., Carter, R., Carter-Sowell, A. R., Chen, S., Craske, M. G., … Williams, L. A. (2021). The Future of Women in Psychological Science. Perspectives on Psychological Science, 16(3), 483-516. https://doi.org/10.1177/1745691620952789

There has been extensive discussion about gender gaps in representation and career advancement in the sciences. However, psychological science itself has yet to be the focus of discussion or systematic review, despite our field’s investment in questions of equity, status, well-being, gender bias, and gender disparities. In the present article, we consider 10 topics relevant for women’s career advancement in psychological science. We focus on issues that have been the subject of empirical study, discuss relevant evidence within and outside of psychological science, and draw on established psychological theory and social-science research to begin to chart a path forward. We hope that better understanding of these issues within the field will shed light on areas of existing gender gaps in the discipline and areas where positive change has happened, and spark conversation within our field about how to create lasting change to mitigate remaining gender differences in psychological science.

PDF

People often endorse explanations that appeal to purpose, even when these ‘teleological’ explanations are scientifically unwarranted (e.g., “water exists so that life can survive on Earth”). In the present research, we explore teleological endorsement in a novel domain—human purpose—and its relationship to moral judgments. Across four studies, we address three questions: (1) Do people believe the human species exists for a purpose? (2) Do these beliefs drive moral condemnation of individuals who fail to fulfill this purpose? And if so, (3) what explains the link between teleological beliefs and moral condemnation? Study 1 (N=188) found that many adults endorsed anthropic teleology (e.g., that humans exist in order to procreate), and that these beliefs correlated with moral condemnation of purpose violations (e.g., judging those who do not procreate immoral). Study 2 (N=199) found evidence of a bi-directional causal relationship: teleological claims about a species resulted in moral condemnation of purpose violations, and stipulating that an action is immoral increased judgments that the species exists for that purpose. Study 3 (N=94) replicated a causal effect of species-level purpose on moral condemnation with novel actions and more implicit character judgments. Study 4 (N=52) found that when a species is believed to exist to perform some action, participants infer that the action is good for the species, and that this belief in turn supports moral condemnation of individuals who choose not to perform the action. Together, these findings shed light on how our descriptive understanding can shape our prescriptive judgments.

PDF
Liquin, E. G., Callaway, F., & Lombrozo, T. (2021). Developmental Change in What Elicits Curiosity. In Proceedings of the 43rd Annual Conference of the Cognitive Science Society (pp. 1360-1366).

Across the lifespan, humans direct their learning towards information they are curious to know. However, it is unclear what elicits curiosity, and whether and how this changes across development. Is curiosity triggered by surprise and uncertainty, as prior research suggests, or by expected learning, which is often confounded with these features? In the present research, we use a Bayesian reinforcement learning model to quantify and disentangle surprise, uncertainty, and expected learning. We use the resulting model-estimated features to predict curiosity ratings from 5- to 9-year-olds and adults in an augmented multi-armed bandit task. Like adults’ curiosity, children’s curiosity was best predicted by expected learning. However, after accounting for expected learning, children (but not adults) were also more curious when uncertainty was higher and surprise lower. This research points to developmental changes in what elicits curiosity and calls for a reexamination of research that confounds these elicitors.

PDF
Lombrozo, T., Knobe, J., & Nichols, S. (Eds.). (2021). Oxford Studies in Experimental Philosophy: Volume 4. Oxford, UK: Oxford University Press.
Oktar, K., & Lombrozo, T. (2021). Deciding to be Authentic: Intuition is Favored Over Deliberation for Self-Reflective Decisions. In Proceedings of the 43rd Annual Conference of the Cognitive Science Society (pp. 562-568).

People think they ought to make some decisions on the basis of deliberative analysis, and others on the basis of intuitive, gut feelings. What accounts for this variation in people’s preferences for intuition versus deliberation? We propose that intuition might be prescribed for some decisions because people’s folk theory of decision-making accords a special role to authenticity, where authenticity is uniquely associated with intuitive choice. Two pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N=631), we find that decisions made on the basis of intuition (vs. deliberation) are more likely to be judged authentic, especially in domains where authenticity is plausibly valued. In Experiment 2 (N=177), we find that people are more likely to prescribe intuition as a basis for choice when the value of authenticity is heightened experimentally. These effects hold beyond previously recognized influences, such as computational costs, presumed efficacy, objectivity, complexity, and expertise.

PDF