
Epistemic Consequentialism

Consequentialism is the view that, in some sense, rightness is to be understood in terms of conduciveness to goodness. Much of the philosophical discussion concerning consequentialism has focused on moral rightness or obligation or normativity. But there is plausibly also epistemic rightness, epistemic obligation, and epistemic normativity. Epistemic rightness is often denoted with talk of justification, rationality, or by merely indicating what should be believed. For example, my belief that I have hands is justified, while my belief that I will win the lottery is not; Alice’s total belief state is rational, while Lucy’s is not; we all should be at least as confident in p or q as we are in p. The epistemic consequentialist claims, roughly, that these kinds of facts about epistemic rightness depend solely on facts about the goodness of the consequences. In slogan form, such a view holds that the epistemic good is prior to the epistemic right.

Many epistemologists seem to have sympathy for the basic idea behind epistemic consequentialism, because many epistemologists have been attracted to the idea that epistemic norms that describe appropriate belief-forming behavior ultimately earn their keep by providing us with some means to garner what is often thought to be the epistemic good of accurate beliefs. Consequentialist thinking has also gained popularity among more formally minded epistemologists, who apply the tools of decision theory to argue in consequentialist fashion for various epistemic norms. And there is also a consequentialist strand in certain areas of philosophy of science, especially those areas that attempt to explain how it is that science as a whole might have considerable epistemic success even if individual scientists are acting irrationally. Thus, there is a kind of prima facie plausibility to epistemic consequentialism.

Table of Contents

  1. Consequentialism
  2. Final Value and Veritism
  3. Consequentialist Theories
    1. A Simple Example
    2. Cognitive Decision Theory
    3. Accuracy First
    4. Traditional Epistemology: Justification
      1. Coherentism
      2. Reliabilism
      3. Evidentialism
    5. Traditional Epistemology Not Concerned with Justification
    6. Social Epistemology
    7. Philosophy of Science
      1. Group versus Individual Rationality
      2. Why Gather Evidence?
  4. Summing Up: Some Useful Distinctions
  5. Objections to Epistemic Consequentialism
    1. Epistemic Trade-Offs
    2. Positive Epistemic Duties
    3. Lottery Beliefs
  6. References and Further Reading

1. Consequentialism

There is unfortunately no consensus about what precisely makes a theory a consequentialist theory. Sometimes it is said that the consequentialist understands the right in terms of the good. Somewhat more generally, but still imprecisely, we could say that the consequentialist maintains that normative facts about Xs (for example, facts about the rightness of actions) depend solely on facts about the value of the consequences of Xs. In light of this, some see consequentialism as a reductive thesis: it purports to reduce normative facts (for instance, about what one ought to do) to evaluative facts of a certain sort (for instance, about what is good). Smith (2009) and others, however, mark what is distinctive about consequentialism differently. Some maintain that a consequentialist is committed to understanding what is right or obligatory in terms of what will maximize value (Smart and Williams 1973, Pettit 2000, Portmore 2007). Still others maintain that a consequentialist is one who is committed to only agent-neutral, rather than agent-relative, prescriptions (where an example of an agent-relative prescription is one that instructs each person S to ensure that S not lie, whereas an agent-neutral prescription instructs each person S to minimize lying) (McNaughton and Rawling 1991). And finally, some maintain that what is distinctive about consequentialism is the lack of intrinsic constraints on action types (Nozick 1974, Nagel 1986, Kagan 1997).

Perhaps the best way to elucidate consequentialism, then, is to point to paradigm cases of consequentialist theories and attempt to generalize from them. On this score there is some agreement: classic hedonic utilitarianism (of the sort defended by Bentham and Mill) is thought to be a clear instance of a consequentialist theory. That theory maintains that an action is morally right if and only if the total sum of pleasure minus pain that results from that action exceeds the total sum of pleasure minus pain of any alternative to that action. The normative facts here are facts about the moral rightness of actions and the utilitarian claims that these facts depend solely on facts about the moral goodness of the consequences of actions, where moral goodness is measured by summing up total pleasure minus total pain.

Though it is not possible to give an uncontroversial set of necessary and sufficient conditions for a theory being a species of consequentialism, it is useful to see that there is some sort of unity to views, such as hedonic utilitarianism, normally classified as consequentialist. The following three-step “recipe” for a consequentialist theory evinces this unity, and will be useful to refer to later. (A similar recipe is given by Berker 2013a,b.)

Step 1. Final Value: identify what has final value, where something has final value iff it is valuable for its own sake (sometimes the term “intrinsic value” is used in the same way).

Example: For the classic hedonic utilitarian, pleasure is the sole thing of final value and pain is the sole thing of final disvalue; thus, final value here generalizes the notion of moral goodness mentioned above.

Step 2. Ranking: explain how certain things relevant to the normative facts you care about are ranked in virtue of their conduciveness to things with final value.

Example: The normative facts of interest to the classic hedonic utilitarian are facts about the rightness and wrongness of actions, so actions are the relevant things to rank. The classic hedonic utilitarian says that actions can be ordered by calculating for each action the sum of the total final value in the consequences of that action.

Step 3. Normative Facts: explain how the normative facts are determined by facts about the rankings.

Example: The classic hedonic utilitarian says that an action a is right if and only if it is ranked at least as high as any action that is an alternative to a.

2. Final Value and Veritism

Before looking at specific consequentialist epistemic theories, it is worth saying something about what epistemic consequentialists typically think about the first step in the recipe, which concerns final value. Many who are sympathetic to epistemic consequentialism also adhere to veritism (the term is due to Goldman 1999; Pritchard 2010 calls this view epistemic value T-monism). According to veritism, the only thing of final epistemic value is true belief and the only thing of final epistemic disvalue is false belief. Generalizing somewhat so that the view can capture approaches that think of belief as graded, we can say that according to veritism, the only thing of final epistemic value is accuracy and the only thing of final epistemic disvalue is inaccuracy. Not all epistemic consequentialists are veritists: some have thought that there is more to final epistemic value than mere accuracy, such as the informativeness or interestingness of the propositions believed, or whether the propositions believed are mutually explanatory or coherent. Others have thought that things such as wisdom (Whitcomb 2007), understanding (Kvanvig 2003), or a love of truth (Zagzebski 2003) have final epistemic value.

But even those consequentialists who think that accuracy does not exhaust what is epistemically valuable tend to think that accuracy is an important component of final epistemic value (for an alternative view, see Stich 1993). It is not hard to see why such a view is theoretically attractive. Although all explanations must come to an end somewhere, it seems that veritism, or at least something like it, is in a good position to give satisfying explanations of our epistemic norms. Veritism together with consequentialism can do so by showing how conforming to a given norm conduces toward the goal of accuracy. If one could show, say, that by respecting one’s evidence one is likely to hold accurate beliefs, then one has a better explanation for an evidence-respecting norm than does the person who says such a norm is simply a brute epistemic fact.

Questions about final epistemic value are important for would-be epistemic consequentialists. This article notes the different views that epistemic consequentialists have held concerning final epistemic value, but there is little substantive discussion about the advantages and disadvantages of competing views about final epistemic value. That said, the debate concerning the nature of final epistemic value is an important debate for epistemic consequentialists to watch. In particular, the epistemic consequentialist will need a notion of final epistemic value according to which final epistemic value is the sort of thing that it makes sense to promote.

3. Consequentialist Theories

In light of the consequentialist recipe above, a specific epistemic consequentialist theory can be obtained by specifying the bearers of final epistemic value, the principle by which options are then ranked in terms of final epistemic value, and the normative facts that this ranking determines. Below, specific epistemic consequentialist theories are presented in this way.

a. A Simple Example

For illustrative purposes, consider a very simple consequentialist theory. According to this view, the only thing of final epistemic value is true belief. Then, say that a belief is justified to the extent that it garners epistemic value for the believer. This can be put in the consequentialist recipe as follows:

Step 1. Final Value: True beliefs have final epistemic value; false beliefs have final epistemic disvalue.

Step 2. Ranking: The normative facts at issue are facts about whether beliefs are justified, so beliefs are the natural thing to rank. According to this view, S’s belief that p is ranked above S’s belief that q iff the belief that p in itself and in its causal consequences garners more epistemic value for S than the belief that q.

Step 3. Normative Facts: The belief that p is justified iff it is ranked above every alternative to believing p.

One might think that this simple view has a relatively obvious flaw. It seems to imply that every true belief is justified and every false belief unjustified. This is what Maitzen (1995) argues:

If one seeks, above all else, to maximize the number of true (and minimize the number of false) beliefs in one’s (presumably large) stock of beliefs, then adding one more true belief surely counts as serving that goal, while adding a false belief surely counts as disserving it. (p. 870)

As clear as this seems, it is actually mistaken. For although the belief that p (when p is false) will not directly add value to S’s belief state, such a false belief may have an effect on other beliefs that S forms later and so, in total, be preferable to adopting the true belief that ~p. That said, no one has defended such a simple version of epistemic consequentialism. In actual practice, the relationship between final epistemic value and epistemic justifiedness is not proposed to be as direct as this simple view would have it. With that, we turn to examine such views.

b. Cognitive Decision Theory

Suppose that we think that rational agents have degrees of belief that can be represented by probability functions, but we think there are still important all-or-nothing epistemic options that these agents have regarding which propositions they accept as true. Patrick Maher (1993), for instance, argues that even if we think of scientists as having degrees of belief, we still need a theory of acceptance if we want to understand science. Why is this? Maher defines accepting that p as sincerely asserting that p (this is not the only definition of acceptance: van Fraassen (1980), though he is writing primarily about subjective probability, thinks of acceptance as a kind of cognitive commitment; Harman (1986, p. 47) sees acceptance as the same as belief and says that one accepts p when (1) one allows oneself to use p as part of one’s starting point for further reasoning and when (2) one takes the issue whether p to be closed in the sense that one is no longer investigating that issue). Further, Maher maintains that the scientific record tells us about which theories scientists asserted, not about what credences scientists had. Thus, a theory of acceptance (in the sense of sincere assertion) is needed to understand science on Maher’s view.

If we think of things roughly in this way, then it is natural to turn to decision theory to determine what propositions agents should accept. Decision theory tells an agent which action it would be rational to perform based on a ranking of each action available to the agent in terms of the action’s expected value. To find the expected value of an action for an agent, one considers each set of consequences the agent thinks is possible given the performance of that action, and then sums up the value of those consequences, weighted by the agent’s degrees of belief that those consequences are realized conditional on that action. An action is then taken to be rational iff no other action is ranked higher than it in terms of expected value. When considering which proposition it would be rational for an agent to accept, it is natural to set things up similarly. Instead of evaluating the usual type of actions, one evaluates acts of acceptance of propositions that are available to the agent. These different acts of acceptance can be ranked in terms of the expected final epistemic value of each act of acceptance.
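The expected-value ranking described here can be sketched in a few lines of Python. All hypotheses, credences, and value assignments below are invented purely for illustration; the point is only the structure of the calculation.

```python
# A minimal sketch of expected-value ranking for acts of acceptance.
# The numbers are invented for illustration.

def expected_value(outcomes):
    """Sum of the value of each possible outcome, weighted by the agent's credence."""
    return sum(credence * value for credence, value in outcomes)

# Each act is paired with (credence, epistemic value) for its possible outcomes:
# here, accepting a hypothesis is worth 1 if it is true and -1 if it is false.
acts = {
    "accept h1": [(0.6, 1.0), (0.4, -1.0)],   # credence 0.6 that h1 is true
    "accept h2": [(0.3, 1.0), (0.7, -1.0)],   # credence 0.3 that h2 is true
}

# Rank acts by expected value; an act is rational iff none is ranked above it.
ranking = sorted(acts, key=lambda act: expected_value(acts[act]), reverse=True)
```

On these invented numbers, accepting h1 has expected value 0.6 × 1 + 0.4 × (−1) = 0.2, while accepting h2 has −0.4, so only accepting h1 counts as rational under this rule.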

Such an approach to acceptance is briefly discussed by Hempel (1960). Isaac Levi (1967) presents a more complete theory of this kind. Levi imagines that a scientist has a set of mutually exclusive and jointly exhaustive hypotheses h1, h2, …, hn and that the scientist’s options for acts of acceptance are to accept one of the hi or to accept a disjunction of some of them. We suppose that scientists have subjective probability functions, which reflect the evidence that they have gathered with respect to the hypotheses in question. Levi’s basic proposal is that agents should accept some hypothesis (or disjunction of hypotheses) if doing so maximizes expected final epistemic value, where the weight for the expectation is provided by the subjective probability function (this is very similar to, though not identical to, the weighting in terms of degrees of belief mentioned above). What is final epistemic value for Levi (Levi uses the term “epistemic utility”)? According to Levi, final epistemic value has two dimensions that correspond to what the goals of any disinterested researcher ought to be. The first dimension is truth. True answers are valued more than false answers. The second dimension is “relief from agnosticism.” The idea here is that more-informative answers (for example, “X wins”) are valued more than less-informative answers (for example, “X or Y wins”). These values pull in opposite directions. One can easily accept a true proposition if informativeness is ignored, as the disjunction “X wins or X does not win” is sure to be true. Similarly, one can easily accept an informative proposition if truth is ignored. Accordingly, Levi defines a family of functions that balance these two dimensions of value. He does not settle on one way of balancing, but instead considers as permissible the whole family of functions that balance these two dimensions of value in different ways.
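Levi’s actual family of epistemic utility functions is more intricate than anything given here, but a toy stand-in conveys the balancing idea. In the Python sketch below (all numbers invented), the score of accepting a disjunction is its probability of being true minus a penalty, weighted by a balancing parameter q, for how uninformative (large) the disjunction is:

```python
from itertools import combinations

def epistemic_utility(disjunction, probs, q):
    """Toy Levi-style score for accepting a disjunction of hypotheses:
    probability of truth, minus q times a penalty for uninformativeness."""
    prob_true = sum(probs[i] for i in disjunction)
    uninformativeness = len(disjunction) / len(probs)   # 1 = the tautology
    return prob_true - q * uninformativeness

def best_acceptance(probs, q):
    """The disjunction of hypotheses with the highest toy score."""
    n = len(probs)
    options = [frozenset(s) for r in range(1, n + 1)
               for s in combinations(range(n), r)]
    return max(options, key=lambda d: epistemic_utility(d, probs, q))

# Three exhaustive hypotheses with invented credences.
probs = [0.5, 0.3, 0.2]
# With little weight on informativeness, the safe full disjunction wins;
# with heavy weight, the single most probable hypothesis is accepted.
cautious = best_acceptance(probs, 0.1)
bold = best_acceptance(probs, 1.0)
```

Varying q reproduces the tension in the text: a low q rewards retreating to the tautologous disjunction, while a high q rewards committing to a single informative hypothesis.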

Several features of Levi’s approach are worth noting. First, note that on Levi’s view it can happen that the proposition a scientist should accept is not the one that the scientist sees as most probable, because final epistemic value is a function of both the truth/falsity of the proposition and its informativeness.

The second point worth noting brings us to an important distinction when considering epistemic consequentialism. Levi is interested in the expected final epistemic value of accepting some proposition h1, but where the value of the consequences of accepting h1 include only the value of accepting h1 and not the causal consequences of this acceptance. That is, suppose an agent has the option of accepting h1 or accepting h2. Suppose that h1 is both more likely to be true and more informative than h2. So on any weighting, and on any final epistemic value function, accepting h1 will rank higher than accepting h2 if we ignore the later causal consequences of these acts of acceptance. But suppose that accepting h2 is known to open up opportunities for garnering much more final epistemic value later (perhaps by allowing one to work on a research project only open to those who accept h2). Levi’s theory says that the agent should accept h1, not h2. Thus, it is a form of consequentialism that ignores the causal consequences of the options being evaluated. What matters are not the causal consequences of accepting h1, but rather the expected final value of the acceptance of h1 itself, ignoring its later causal consequences.

One might argue that this feature of Levi’s view is enough to make it thereby not a form of consequentialism, because it is not faithful to the idea that the total set of causal consequences of an option (for example, an action or a belief or an act of acceptance) is relevant to the normative verdict concerning that option. Be that as it may, there is still a teleological structure to Levi’s view: acts of acceptance inherit their normative properties in virtue of conducing to something with final epistemic value. It is just that “conducing” is construed noncausally, in this case as something more akin to instantiation (Berker (2013a,b) explicitly allows such views to count as instances of epistemic consequentialism or epistemic teleology—he uses both terms). For future reference, I will use the term “restricted consequentialism” to refer to views that are teleological in the sense of Levi’s view, but do not take the total set of causal consequences of an option to be relevant to its normative status. In section 5, this distinction is examined more carefully.

Cognitive decision theory fits into our consequentialist recipe as follows:

Step 1. Final Value: Accepting propositions that are true has final epistemic value, and accepting propositions that are informative has final epistemic value. The total final epistemic value of accepting a proposition is a function of both its truth and its informativeness, though the way that these values are balanced can permissibly differ from agent to agent.

Step 2. Ranking: The act of accepting some answer to a question is ranked according to its subjective expected final epistemic value.

Step 3. Normative Facts: One should accept answer a to question Q iff accepting a is ranked at least as high as every other alternative answer to Q.

For criticism of this approach, see Stalnaker (2002) and Percival (2002).

c. Accuracy First

Cognitive decision theory takes for granted that agents have a certain kind of doxastic state, represented by a probability function, and uses this to tell us about the norms for the different kind of doxastic state of acceptance. But suppose that one does not want to take for granted such an initial doxastic state. Does decision theory have anything to offer such an epistemic consequentialist?

James Joyce (1998) shows that the answer to this question is “yes” if we accept certain assumptions about final epistemic value that many find plausible. Joyce argues that degrees of belief—henceforth, credences—that are not probabilities are accuracy-dominated by credences that are probabilities. A credence function, c, is accuracy-dominated by another, c′, when in all possible worlds, the accuracy of c′ is at least as great as the accuracy of c, and in at least one world, the accuracy of c′ is greater than the accuracy of c (for an introduction to possible worlds, see IEP article Modal Metaphysics). Joyce uses this, plus some assumptions about final epistemic value, to establish probabilism, the thesis that rational credences are probabilities.

As Pettigrew (2013c) has noted, the basic Joycean framework requires one to do three things. First, one defines a final epistemic value function (often called an “epistemic utility function”). Second, one selects a decision rule from decision theory. Finally, one proves a mathematical theorem of the sort that says only doxastic states with certain features are permissible given the decision rule and final epistemic value function. Let us consider each of these steps in turn.

The final epistemic value functions that are typically used are different in kind from the functions used in cognitive decision theory. Whereas the final epistemic value functions in cognitive decision theory tend to value both accuracy—that is, truth and falsity—and informativeness, the final epistemic value functions in the Joycean tradition value only accuracy (this is why the moniker “accuracy first” is appropriate). Accuracy can be understood in different ways. There are two main issues here: (1) what counts as perfect accuracy? (2) how does one measure how far away a doxastic state is from perfect accuracy? With respect to (1), Joyce (1998) takes a credence function to be perfectly accurate at a world when the credence function matches the truth-values of propositions in that world (that is, assigns 1s to the truths and 0s to the falsehoods). Many have followed him in this, although there are alternatives (for example, one could think that a credence function is perfectly accurate at a world if it matches the chances at that world rather than the truth-values at that world). With respect to (2), things get more complicated. The appropriate mathematical tool to use to calculate the distance a credence function is from perfect accuracy is a scoring rule, that is, a function that specifies an accuracy score for credence x in a proposition relative to two possibilities: the possibility that the proposition is true and the possibility that it is false. There are many constraints that can be placed on scoring rules, but one popular constraint is that the scoring rule be proper. A scoring rule is proper if and only if the expected accuracy score of a credence of x in a proposition q, where the expectation is weighted by probability function P, is maximized at x = P(q). Putting together a notion of perfect accuracy and a notion of distance to perfect accuracy yields a final epistemic value function that is sensitive solely to accuracy.
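Propriety can be checked numerically for the quadratic (Brier-style) score. The Python sketch below assumes the score 1 − (truth-value − credence)² for a single proposition and confirms by grid search that expected accuracy, computed with probability p, peaks at credence p:

```python
def brier_accuracy(credence, truth):
    """Accuracy of a credence in one proposition: 1 minus squared distance
    from the proposition's truth-value (1 for true, 0 for false)."""
    truth_value = 1.0 if truth else 0.0
    return 1.0 - (truth_value - credence) ** 2

def expected_accuracy(credence, p):
    """Expected accuracy of the credence, weighted by probability p for the proposition."""
    return p * brier_accuracy(credence, True) + (1 - p) * brier_accuracy(credence, False)

# Propriety check: for any p, expected accuracy should peak exactly at credence = p.
p = 0.7
grid = [i / 1000 for i in range(1001)]
best_credence = max(grid, key=lambda x: expected_accuracy(x, p))
```

On this grid the maximizing credence comes out at 0.7, exactly the probability used to compute the expectation, which is what propriety demands.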
One proper scoring rule that is often used as a measure of accuracy is the Brier score. Let vw(q) be a function that takes value 1 if proposition q is true at possible world w and that takes value 0 if proposition q is false at possible world w. Thus, vw(q) merely tells us whether proposition q is true or false at possible world w. In addition, let c(q) be the credence assigned to proposition q, and let F be the set of propositions to which our credence function assigns credences. Then the Brier score (construed here as an accuracy score, so that higher scores are better) for that credence function at possible world w is:

B(c, w) = Σq∈F [1 − (vw(q) − c(q))²]
This will give us an accuracy score for every credence function for any world we please. Suppose, for example, that we are considering two credence functions defined over only the proposition q and its negation:

c1(q) = 0.75                c2(q) = 0.8

c1(~q) = 0.25             c2(~q) = 0.3

There are two possible worlds to consider: the world where q is true and the world where it is false. In the world (call it “w1”) where q is true, the Brier score for each credence function is as follows:

Brier score of c1 at w1: [1 − (1 − 0.75)²] + [1 − (0 − 0.25)²] = 0.9375 + 0.9375 = 1.875

Brier score of c2 at w1: [1 − (1 − 0.8)²] + [1 − (0 − 0.3)²] = 0.96 + 0.91 = 1.87

As one can verify, c1 scores better than c2 in a world where q is true. Now, consider a world where q is false (call this world “w2”):

Brier score of c1 at w2: [1 − (0 − 0.75)²] + [1 − (1 − 0.25)²] = 0.4375 + 0.4375 = 0.875

Brier score of c2 at w2: [1 − (0 − 0.8)²] + [1 − (1 − 0.3)²] = 0.36 + 0.51 = 0.87

Again, as one can verify, c1 scores better than c2 in a world where q is false.
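The worked example can be verified in a few lines of Python, assuming the Brier-style accuracy score (the sum over propositions of 1 minus squared distance to the truth-value, so that higher is better):

```python
def brier_accuracy_score(credences, world):
    """Sum over propositions of 1 - (truth-value - credence)^2; higher is better."""
    return sum(1 - (world[prop] - credences[prop]) ** 2 for prop in credences)

c1 = {"q": 0.75, "not-q": 0.25}
c2 = {"q": 0.8, "not-q": 0.3}

w1 = {"q": 1, "not-q": 0}   # the world where q is true
w2 = {"q": 0, "not-q": 1}   # the world where q is false

# c1 scores 1.875 at w1 and 0.875 at w2; c2 scores 1.87 and 0.87.
# Since c1 does better in both worlds, c1 dominates c2.
```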

Once one has a final epistemic value function, such as the Brier score, one must pick a decision rule. Joyce (1998) uses the decision rule that dominated options are impermissible. In the example immediately above, c2 is dominated by c1 because c1 scores at least as well as c2 in every possible world and strictly better in at least one. Thus, c2 is an impermissible credence function to have.

Our example considers only two very simple credence functions. The final step in Joyce’s program is to prove a mathematical theorem that generalizes the specific thing we saw above. Joyce (1998) proves that for certain choices of accuracy measures, including the Brier score, every incoherent credence function is dominated by some coherent credence function, where a credence function is coherent iff it is a probability function. (Note that in our example, c2 is incoherent while c1 is coherent, thus illustrating an instance of this theorem.) Recall that probabilism is the thesis that rational credence functions are coherent. If we take permissible credence functions to be rational credence functions and if we can prove that no probabilistically coherent function is itself dominated—something that Joyce (1998) does not prove, but that is proven in Joyce (2009)—then we have a proof of probabilism from some assumptions about final epistemic value and about an appropriate decision rule.
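A small grid search illustrates the theorem in the two-proposition case. The Python sketch below assumes the same Brier-style accuracy score as the worked example and an incoherent credence function whose credences sum to 1.1; it searches the coherent functions c(q) = x, c(~q) = 1 − x for dominators:

```python
def accuracy(cq, cnotq, q_true):
    """Brier-style accuracy of credences in q and ~q at a world."""
    vq, vnotq = (1.0, 0.0) if q_true else (0.0, 1.0)
    return (1 - (vq - cq) ** 2) + (1 - (vnotq - cnotq) ** 2)

# An incoherent credence function: its credences in q and ~q sum to 1.1.
inc_q, inc_notq = 0.8, 0.3

# A dominator must be at least as accurate in both worlds and strictly
# more accurate in at least one.
dominators = []
for i in range(101):
    x = i / 100
    scores = [(accuracy(x, 1 - x, w), accuracy(inc_q, inc_notq, w))
              for w in (True, False)]
    if (all(coh >= inc for coh, inc in scores)
            and any(coh > inc for coh, inc in scores)):
        dominators.append(x)
```

On this grid the search turns up c(q) = 0.75, the coherent function from the worked example; Joyce’s theorem guarantees that some coherent dominator exists for every incoherent credence function.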

Others have altered or extended this approach in various ways. One alteration of Joyce’s program is to use a different decision rule, for instance, the decision rule according to which permissible options maximize expected final epistemic value. Leitgeb and Pettigrew (2010a,b) use this decision rule to prove that no incoherent credence function maximizes expected utility.

The results can be extended to other norms, too. For instance, conditionalization is a rule about how to update one’s credence function in light of acquiring new information. Suppose that c is an agent’s credence function and ce is the agent’s credence function after learning e and nothing else. Conditionalization maintains that the following should hold:

For all a, and all e, c(a|e) = ce(a), so long as c(e) > 0.

In this expression, c(a|e) is the conditional probability of a, given e. Greaves and Wallace (2006) prove that, with suitable choices for accuracy measures, the updating rule conditionalization maximizes expected utility in situations where the agent will get some new information from a partition (a simple case of this is where an agent will either learn p or learn ~p). Leitgeb and Pettigrew (2010a,b) give an alternative proof that conditionalization maximizes expected utility.
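The flavor of the Greaves and Wallace result can be illustrated numerically. In the Python sketch below (all worlds, priors, and propositions invented for illustration), an agent about to learn which cell of a two-cell partition obtains compares two updating policies, conditionalization and ignoring the evidence, by their expected Brier accuracy under the prior:

```python
# All worlds, priors, and propositions below are invented for illustration.
prior = {"w1": 0.4, "w2": 0.1, "w3": 0.2, "w4": 0.3}   # credences over four worlds
a = {"w1", "w2"}                                        # the proposition of interest
partition = [{"w1", "w3"}, {"w2", "w4"}]                # the agent will learn one cell

def conditional_prob(event, given):
    """Prior probability of the event, conditional on the evidence cell."""
    return sum(prior[w] for w in event & given) / sum(prior[w] for w in given)

def policy_expected_accuracy(policy):
    """Expected (under the prior) Brier accuracy of the post-update credence in a,
    where a policy maps an evidence cell to a new credence in a."""
    total = 0.0
    for cell in partition:
        new_credence = policy(cell)
        for w in cell:
            truth_value = 1.0 if w in a else 0.0
            total += prior[w] * (1 - (truth_value - new_credence) ** 2)
    return total

conditionalize = lambda cell: conditional_prob(a, cell)
ignore_evidence = lambda cell: sum(prior[w] for w in a)   # keep the prior credence
```

On these invented numbers, conditionalizing yields an expected accuracy of about 0.792 against 0.75 for ignoring the evidence; given a proper score, no rival policy can beat conditionalization in this kind of setup.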

Joyce is concerned with proving norms for degrees of belief. The approach can be extended to prove norms where all-or-nothing belief states are taken as primitive. Easwaran and Fitelson (2015) extend the approach in this way. Interestingly, their approach yields the result that some logically inconsistent belief states are permissible (for instance, in lottery cases). The approach has also been extended to comparative confidence rankings (where a comparative confidence ranking represents only certain qualitative facts about how confident an agent is in propositions—for instance, that she is more confident in p than in q). Williams (2012) has extended the approach in a different direction by examining cases where the background logic is nonclassical.

Joyce’s (1998) approach fits nicely into the consequentialist recipe (and subsequent work can be made to fit into the recipe, too):

Step 1. Final Value: Credences have final epistemic value in proportion to how accurate they are.

Step 2. Ranking: Credence functions are put into two classes: dominated credence functions and non-dominated credence functions.

Step 3. Normative Facts: A credence function is permissible to hold if and only if it is non-dominated.

In this way, the accuracy-first approach appears to be an especially “pure” version of epistemic consequentialism. The project is to work out what the epistemic norms are for doxastic states given that you care only about the accuracy of those doxastic states.

However, one prominent objection to the accuracy-first approach questions this. To see this, note that the verdicts about which credence functions dominate (or maximize expected epistemic value) are not sensitive to the total causal consequences of adopting a credence function, as they look only at the expected epistemic value of that state and not at the causal effects of the adoption of that state. There are really two points here. The first point is the same point that was noted with respect to cognitive decision theory: the accuracy-first program seems to be an instance of restricted consequentialism. This can make the view seem to not genuinely be a consequentialist view. Greaves (2013) raises some objections to the program along these lines; the issue she raises is very similar to the kinds of issues that Berker (2013a,b) and Littlejohn (2012) have raised in objections to epistemic consequentialism in traditional epistemology. The general worry is discussed below in section 5a.

The second point concerns a distinction that can be drawn between evaluating a doxastic state and evaluating the adoption of a doxastic state. The accuracy-first program seems to be interested in the former rather than the latter, which can make it seem further still from traditional consequentialism. This issue can be brought out by an example due to Michael Caie (2013). Suppose we are considering what the permissible credence function is with respect to only the propositions q and ~q where q is a self-referential proposition that says “q is assigned less than 0.5 credence.” This is an odd proposition in that if q is assigned less than 0.5 credence, then it is true (and so it would be more accurate to increase one’s credence in q), but if one increases one’s credence in q to 0.5 or greater, then q is false (and so it would be more accurate to decrease one’s credence in q). In such a situation, an incoherent credence function appears to dominate the coherent ones. To see this, note that there are no worlds where c(q) = 1, c(~q) = 0, and where q is true (because if c(q) =1, then q is false) or where c(q) = 0, c(~q) = 1, and where q is false (because if c(q) = 0, then q is true). The best that a coherent credence function can do is to assign c(q) = c(~q) = 0.5. In that case, q is false, and so the Brier score is 1.5. But compare this with the credence function, c*, according to which c*(q) = 0.5 and c*(~q) = 1. In that case, q is again false, and so c*(~q) gets a better score than does c(~q). Overall, c* gets a Brier score of 1.75.
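Caie’s example is easy to check by direct computation. The Python sketch below assumes the Brier-style accuracy score discussed earlier in the article and makes q’s truth-value depend on the very credence assigned to it:

```python
def q_truth_value(credence_in_q):
    """q says of itself: 'q is assigned less than 0.5 credence.'"""
    return 1.0 if credence_in_q < 0.5 else 0.0

def accuracy(cq, cnotq):
    """Brier-style accuracy of credences in q and ~q, where q's truth-value
    depends on the credence assigned to q."""
    vq = q_truth_value(cq)
    vnotq = 1.0 - vq
    return (1 - (vq - cq) ** 2) + (1 - (vnotq - cnotq) ** 2)

# Best a coherent function c(q) = x, c(~q) = 1 - x can do, found by grid search.
best_coherent = max(accuracy(i / 100, 1 - i / 100) for i in range(101))
# Caie's incoherent assignment.
caie_score = accuracy(0.5, 1.0)
```

The coherent optimum of 1.5 is achieved at c(q) = c(~q) = 0.5, yet the incoherent assignment c(q) = 0.5, c(~q) = 1 scores 1.75, matching the numbers in the example.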

How can this be, if we have proofs that probabilistically coherent credence functions dominate incoherent credence functions? The answer to this is that the proofs by Joyce and others assume a very strong kind of independence between belief states and possible worlds. Even though there is no world where c(q) = 1, c(~q) = 0, and where q is true, Joyce and others still consider such worlds when working out which credence functions dominate or maximize expected epistemic value. With these possible worlds back in play, the incoherent c* is dominated. In particular, for the desired results (that probabilism is true, that conditionalization is the correct updating rule, and so forth) to go through, we must be able to assess how accurate a doxastic state is in a world where that doxastic state could not be held. Further, we must maintain that facts about the accuracy of doxastic states in worlds where they cannot be held are sometimes relevant to our evaluation of a doxastic state in some other world where it is actually held. This might lead one to question whether this accuracy-first approach really is a form of epistemic consequentialism (though that is of course complicated by the fact that there is no consensus about what it takes to be a consequentialist theory) and indeed whether the evaluative framework can be motivated.

d. Traditional Epistemology: Justification

i. Coherentism

According to coherentism about justification, a belief is justified if and only if it belongs to a coherent system of beliefs (note that the term “coherent” here refers to some informal notion of coherence, perhaps related to, but distinct from, the notion of coherent credences). This on its own does not commit coherentists to any sort of epistemic consequentialism. However, some of the debates and claims made within the coherentist literature suggest that some prominent coherentists are committed to some form of epistemic consequentialism. For instance, in The Structure of Empirical Knowledge, BonJour (1985) defends a version of coherentism about justification. In this work, BonJour devotes an entire chapter to giving an argument for the following thesis:

A system of beliefs which (a) remains coherent (and stable) over the long run and (b) continues to satisfy the Observation Requirement is likely, to a degree which is proportional to the degree of coherence (and stability) and the longness of the run, to correspond closely to independent reality. (p. 171)

BonJour is thus attempting to show that the degree of coherence of a set of beliefs is proportional to the likelihood that those beliefs are true. He calls this a metajustification for his coherence theory of justification. And why is such a metajustification required? He writes:

The basic role of justification is that of a means to truth, a more directly attainable mediating link between our subjective starting point and our objective goal. […] If epistemic justification were not conducive to truth in this way, if finding epistemically justified beliefs did not substantially increase the likelihood of finding true ones, then epistemic justification would be irrelevant to our main cognitive goal and of dubious worth. […] Epistemic justification is therefore in the final analysis only an instrumental value, not an intrinsic one. (pp. 7–8)

This strongly suggests that BonJour thinks of the epistemic right—justification—in consequentialist terms (Berker (2013a) claims that BonJour (1985) should be understood in this way). If justification understood as coherence is not conducive to truth, then justification understood as coherence is not valuable. This suggests the following picture:

Step 1. Final Value: True beliefs have final epistemic value; false beliefs have final epistemic disvalue.

Step 2. Ranking: Sets of beliefs are ranked in terms of their degree of coherence where this degree of coherence is proportional to the likelihood that the set of beliefs is true.

Step 3. Normative Facts: A belief is justified iff it belongs to a set of beliefs that is coherent above some threshold.

The claim in Step 2, that coherence is truth-conducive, has been addressed explicitly in the literature, starting with Klein and Warfield (1994). They argue that the fact that one set of propositions is more coherent than another set does not entail that the conjunction of the propositions in the first set is more likely to be true than the conjunction of propositions in the second set. The basic argument is that a set of propositions (say, the set including a and b) can sometimes be made more coherent by adding an additional proposition to it (to yield the set including a, b, and c). However, the conjunction (a and b and c) is never more probable than the conjunction (a and b). Bovens and Hartmann (2003) and Olsson (2005) add to this literature and each prove results to the effect that no matter one’s measure of coherence, there will be cases where one set is more coherent than another, but its propositions are less likely. (For one response to these arguments, see Huemer (2011); Angere (2007) considers whether these arguments undermine BonJour’s coherentism.)
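The probabilistic core of this argument, that conjoining an extra proposition can never raise a conjunction's probability, can be checked by brute force. The randomly generated joint distributions below are arbitrary illustrations, not examples from Klein and Warfield.

```python
# Adding a conjunct can never raise a conjunction's probability: a brute-force
# check over randomly generated joint distributions over a, b, c.
import itertools
import random

random.seed(0)
for _ in range(1000):
    # random joint distribution over the 8 truth-value assignments to a, b, c
    weights = [random.random() for _ in range(8)]
    total = sum(weights)
    assignments = list(itertools.product([True, False], repeat=3))
    probs = {assignment: w / total for assignment, w in zip(assignments, weights)}
    p_ab = sum(p for (a, b, c), p in probs.items() if a and b)
    p_abc = sum(p for (a, b, c), p in probs.items() if a and b and c)
    assert p_abc <= p_ab
print("P(a and b and c) never exceeds P(a and b)")
```

So even if adding c makes the set more coherent on some measure, the enlarged conjunction cannot be more probable than the original.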

In light of the difficulties in establishing that coherence is truth-conducive, coherence theorists may decline to go down the consequentialist route. Such a coherentist might maintain that beliefs that are members of coherent sets are epistemically right regardless of whether such sets are likely to be true. This mimics the non-consequentialist Kantian, who maintains that certain actions are right independently of the final value to which they lead.

ii. Reliabilism

Reliabilism about justification, as championed by Alvin Goldman (1979), maintains that beliefs are justified when they are produced by suitably reliable processes. Put another way, beliefs are justified when produced by the right kinds of processes, and the right kinds of processes are those that are truth-conducive. One helpful way to think about the consequentialist structure of reliabilism is to think of it as analogous to rule utilitarianism. According to the rule utilitarian, we evaluate moral rules for rightness directly in terms of the consequences of their widespread acceptance. Actions are then evaluated in terms of whether or not they conform to a right rule. Similarly, according to reliabilism, the things up for direct consequentialist evaluation are not acts of acceptance or particular beliefs that could be adopted. Rather, processes of belief formation are evaluated consequentially. Reliabilists tend to see true belief as the sole thing of final epistemic value. Processes are thus evaluated based on their truth-ratios, the ratio of true beliefs produced to total beliefs produced. However, unlike a maximizing theory, reliabilism maintains that a process is acceptable just in case it has a truth-ratio above some absolute threshold. It is thus different from maximizing theories in two ways. First, a process can be acceptable even if it is not the most reliable process and thus not the optimally truth-conducive process. Second, a process need not be acceptable even if it is the most reliable process, because the reliabilist requires that processes meet some minimum threshold to be acceptable.

We can put a simple version of reliabilism about justification into our consequentialist recipe:

Step 1. Final Value: True beliefs have final epistemic value; false beliefs have final epistemic disvalue.

Step 2. Ranking: Processes are put into two classes: acceptable and not acceptable. If the process has a reliability score at or above the threshold, the process is acceptable; otherwise, it is not acceptable. The reliability score of a process p at world w is the number of true beliefs that p produces at w divided by the total number of beliefs that p produces at w (that is, the truth-ratio of p at w).

Step 3. Normative Facts: A belief is justified for S at t at w iff S’s belief at t at w is produced by an acceptable belief-forming process at w.

There are subtle ways in which reliabilism can differ from what the recipe above suggests. One of the most notable differences concerns Goldman’s (1986) approach. Although Goldman (1979) gives a theory that looks very much like what is represented above, in Goldman (1986) it is not individual processes that are ranked at Step 2, but rather systems of rules about which processes may and may not be used. A system of rules is then acceptable if and only if a believer who follows those rules has an overall truth-ratio above a certain threshold. Thus, the analogy to rule utilitarianism is even stronger in Goldman (1986) than in Goldman (1979), something which he explicitly notes. There has also been some dispute among reliabilists about the exact way that processes should be scored for their reliability (and so the exact form of Step 2), but despite that, the view looks to be committed to some form of consequentialism.

iii. Evidentialism

One of the main rivals of reliabilism about justification is evidentialism, initially defended by Richard Feldman and Earl Conee (1985) (whether evidentialism is a rival of coherentism depends subtly on exactly how the views are spelled out). Evidentialism maintains that the belief that p is justified for an agent at time t iff p is supported by the agent’s total evidence at t. Conee (1992) motivates the total evidence requirement with reference to an overriding goal of true belief, in which case evidentialists agree with reliabilists and with BonJour-style coherentists that justification is a matter of truth conduciveness. Feldman (2000) motivates the total evidence requirement with reference to an overriding goal of reasonable belief (rather than true belief), in which case evidentialists disagree with reliabilists and BonJour-style consequentialists about the nature of final epistemic value, but agree that justification should be spelled out in consequentialist terms. More recently, Conee and Feldman (2008) have suggested that what has final epistemic value is coherence. Whether this view is committed to consequentialism depends on how the details are spelled out. If the idea is that a doxastic state is justified in proportion to how much it promotes the value of coherence, whether in itself or in its causal consequences, then such a view is plausibly committed to consequentialism, with the good of coherence substituted for the good of true belief. However, there may be other ways of interpreting their view according to which it looks less committed to consequentialism.

It should be noted that Feldman (1998) makes clear that the only thing relevant to whether one should believe p is one’s evidence now concerning p’s truth. The causal consequences of believing p are explicitly ruled out by Feldman as relevant to that belief’s justificatory status. So if Feldman is to count as a consequentialist, it is of a very restricted sort. Presumably, Feldman holds something similar in Conee and Feldman (2008). Conee (1992), on the other hand, has expressed more sympathy with the idea that we should sometimes sacrifice epistemic value now for more epistemic value later. Thus, there is perhaps a stronger case that Conee’s version of evidentialism is also some form of consequentialism.

e. Traditional Epistemology Not Concerned with Justification

Stephen Stich (1990) offers a method of epistemic evaluation not concerned with justification, but that is committed to consequentialism. According to Stich, there are no special epistemic values (such as true belief); there are just things that people happen to value. Reasoning processes and reasoning strategies are seen as one tool that we use to get what we value. Stich (1993, p. 24) writes: “One system of cognitive mechanism is preferable to another if, in using it, we are more likely to achieve those things that we intrinsically value.” Thus, cognitive mechanisms are ranked in terms of their consequences, but the consequences that matter are not uniquely epistemic; they include anything that we happen to intrinsically value.

Richard Foley’s (1987) The Theory of Epistemic Rationality is not directed at analyzing justification. Nevertheless, it provides another example of work in traditional epistemology that seems to be committed to some form of epistemic consequentialism. Foley identifies our epistemic goal as that of now believing those propositions that are true and not now believing those propositions that are false. It is then epistemically rational for a person to believe a proposition whenever on careful reflection that person has reason to believe that believing that proposition will promote his or her epistemic goals, provided that all else is equal. Foley is clear, however, that he does not intend his view to sanction as rational adopting a belief that one is now confident is false in order to garner more true beliefs later. Thus, like some of the other views canvassed here, Foley adopts something like a consequentialist framework for evaluating beliefs, but in a restricted way, where the causal consequences of beliefs are not relevant to the normative verdicts of those beliefs.

Though a large focus of Goldman (1986) is to give a reliabilist account of justification, he notes that there are other important ways that processes, and thus the beliefs produced by those processes, can be evaluated. In particular, Goldman considers evaluating processes for their speed and for their power. The speed of a process concerns how quickly it issues true beliefs. The power of a process concerns how much information it gives you. A highly reliable process might have very little speed if it takes a very long time to issue a belief. And the same highly reliable process might have very little power if it produces only that one belief. Goldman suggests that we can use a consequentialist-style analysis to evaluate processes along these dimensions, too.

Bishop and Trout (2005) argue against the practice of so-called standard analytic epistemology, which includes many of the approaches to justification looked at above. Bishop and Trout propose a view according to which we evaluate reasoning strategies by drawing on empirical work in psychology, rather than by consulting our intuitions. According to Bishop and Trout, the three factors that affect the quality of a reasoning strategy are: (1) whether the strategy is reliable across a wide range of problems, (2) the ease with which the strategy is used, and (3) the significance of the problems toward which the reasoning strategy can be used. They emphasize that whether a set of reasoning strategies is an excellent one to use depends on a cost/benefit analysis. It is natural, then, to think of their normative verdicts about whether a reasoning strategy is excellent as depending on the consequences of using that strategy along dimensions (1)–(3).

In this section and in the one before, we have seen that some traditional epistemologists with otherwise diverse views about justification or epistemic evaluation more generally seem to be committed, at bottom, to a kind of epistemic consequentialism. The aforementioned theories do not merely identify some bearer of final epistemic value, but also define one designator of epistemic rightness (for example, justification, rationality, epistemic excellence) in terms of such value.

f. Social Epistemology

Social epistemology is concerned with the way that social institutions, practices, and interactions are related to our epistemic endeavors, such as knowledge generation. Several prominent approaches within social epistemology also seem to be committed to some form of epistemic consequentialism.

Alvin Goldman’s (1999) Knowledge in a Social World is a nice example of social epistemology done with explicit commitments to consequentialism. Goldman writes:

People have interest, both intrinsic and extrinsic, in acquiring knowledge (true belief) and avoiding error. It therefore makes sense to have a discipline that evaluates intellectual practices by their causal contributions to knowledge or error. This is how I conceive of epistemology: as a discipline that evaluates practices along truth-linked (veritistic) dimensions. Social epistemology evaluates specifically social practices along these dimensions. (p. 69)

Goldman’s general approach is to adopt a question-answering model. According to this approach, beliefs in propositions have value or disvalue when those propositions are answers to questions that interest the agent. This suggests that Goldman promotes a view according to which final epistemic value is accuracy with respect to questions of interest, and not mere accuracy alone. As Goldman conceives of it, the epistemic value of believing a true answer to a question of interest is 1, the epistemic value of withholding belief on a true answer is 0.5, and the epistemic value of rejecting a true answer is 0. Goldman extends this to degrees of belief in the natural way: the epistemic value of having a degree of belief x in a true proposition is x. (It is worth noting that this corresponds to an improper scoring rule; compare section 3c.) We can then evaluate social practices instrumentally, in terms of their causal contributions to belief states that have final epistemic value. Goldman does this by first specifying the appropriate range of applications for a practice. This will involve actual and possible applications (because some practices do not have an actual track record). Second, one takes the average performance of the practice across these applications. The average performance of a practice determines how it is ranked compared to its competitors. Thus, on this view, it is something like objective expected epistemic value that ranks the various practices.
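The parenthetical point about impropriety can be made concrete. The sketch below is not Goldman's own presentation; in particular, scoring a credence r in a false proposition as 1 − r (the credence in the true proposition ~p) is an assumed rendering of the linear rule. Under that rule, an agent maximizing expected epistemic value by her own lights is pushed to an extreme credence; under a proper Brier-style rule she keeps her own credence.

```python
# Illustrative sketch (not Goldman's presentation). An agent with credence x
# in p chooses which credence r to adopt so as to maximize expected value.

def expected_linear(r, x):
    # linear rule: value r if p is true; if p is false, value 1 - r
    # (the credence in the true proposition ~p) -- an assumed rendering
    return x * r + (1 - x) * (1 - r)

def expected_brier(r, x):
    # Brier-style accuracy 1 - (r - v)**2, with v = 1 if p is true, 0 if false
    return x * (1 - (r - 1) ** 2) + (1 - x) * (1 - r ** 2)

x = 0.7
grid = [i / 100 for i in range(101)]
best_linear = max(grid, key=lambda r: expected_linear(r, x))
best_brier = max(grid, key=lambda r: expected_brier(r, x))
print(best_linear)  # 1.0: the linear rule rewards jumping to certainty
print(best_brier)   # 0.7: the Brier rule rewards sticking with x
```

This extremizing behavior is exactly what makes the linear rule improper in the sense discussed in section 3c.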

Consider an example. Goldman argues that civil-law systems are better, from an epistemic perspective, than are common-law systems. The argument for this is complex, but the general structure follows the framework described above. Goldman considers various differences between the two systems, including the numerous exclusionary evidentiary rules in the common-law system as compared to the civil-law system, the large role that adversarial lawyers play in the common-law system as compared to the civil-law system, and the fact that the civil-law system employs trained judges as decision-makers rather than lay jurors. With respect to each of these differences, one can approximate the epistemic value for the relevant decision-makers under each system. For instance, one can estimate how many correct verdicts compared to incorrect verdicts jurors would reach if there were exclusionary evidentiary rules compared to if there were not. On balance, Goldman argues, the civil-law system performs better. For another evaluation of legal structures in consequentialist terms, see Laudan (2006).

Goldman (1999) directs this same style of consequentialist argument toward a variety of social practices, including testimony, argumentation, Internet communication, speech regulation, scientific conventions, law, voting, and education.

Note, however, an important shift in the consequentialist view Goldman defends here compared to the earlier theories considered. Previously, the things being evaluated were belief states or acts of acceptance. Here, Goldman is evaluating social practices and methodologies. We could call the approach in Goldman (1999) an instance of methodological epistemic consequentialism, whereas the former theories are instances of doxastic epistemic consequentialism (note that this terminology is not standard and is introduced simply for clarity within this article).

The basic view can be put into our recipe as follows:

Step 1. Final Value: Accurate beliefs of S in answer to questions that interest S have final epistemic value.

Step 2. Ranking: Social practices are ranked according to the average amount of final epistemic value that they produce across the range of situations they can be applied to.

Step 3. Normative Facts: Social practice A is epistemically better than social practice B just in case A and B are alternatives to each other and A is ranked higher than B in Step 2.
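Steps 2 and 3 can be sketched in miniature. All numbers below are invented; they stand in for the average epistemic value a practice produces across its actual and possible applications.

```python
# Steps 2 and 3 in miniature: rank practices by the average epistemic value
# they produce across their applications, then compare alternatives by rank.
# All values are invented for illustration.

applications = {
    "practice_A": [0.9, 0.7, 0.8],  # epistemic value realized per application
    "practice_B": [0.6, 0.5, 0.7],
}
avg = {name: sum(values) / len(values) for name, values in applications.items()}
better = max(avg, key=avg.get)
print(better)  # practice_A
```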

For criticism of Goldman’s social epistemology that focuses specifically on its consequentialist commitments, see DePaul (2004). See also Fallis (2000, 2006).

g. Philosophy of Science

Though Goldman’s work in social epistemology touches on aspects of science, more generally his focus is on social practices. Others are interested in similar questions about social practices, structures, and conventions, but specifically with respect to science. In some of this work, there is a clear foundation of something like epistemic consequentialism.

i. Group versus Individual Rationality

Philip Kitcher (1990) is one of the first to apply formal models to social structures in science to determine the optimal structure for a group of researchers to achieve their scientific goals. The guiding idea behind his work is that if everyone were rational, then they would each make decisions about which projects to explore based on what the evidence supports and there would be a uniformity of practices among scientists. This uniformity would be bad, however, because it would prevent people from pursuing research on new up-and-coming theories (for example, continental drift in the 1920s) as well as on older outgoing theories (for example, phlogiston theory in the 1780s). Kitcher defines two notions: X’s personal epistemic intentions are what X wishes to achieve himself and X’s impersonal epistemic intentions are what X wishes his community to achieve. The question at hand can then be put: how would scientists rationally decide to coordinate their efforts if their decisions were dominated by their impersonal epistemic intentions?

Kitcher formalizes this situation by supposing that there are N researchers working on a particular research question, and each has to determine which research program she will pursue. Define a return function, Pi(n), which represents the chance that program i will be successful given that n researchers are pursuing it. Suppose that each researcher’s personal epistemic intention is to successfully answer the research question. In that case, each researcher will join whichever program i has the largest value of Pi(ni + 1), where ni is the number of researchers already pursuing i; that is, she joins the program whose chance of success would be greatest with her on board. However, if we suppose that each researcher’s impersonal epistemic intention is that someone in the community of researchers successfully answers the question, then this way of choosing research programs may not be the way to realize the impersonal epistemic intention. Consider a simple example where there are two research programs, 1 and 2, and N researchers. The best way to achieve the group goal is to maximize P1(n) + P2(N-n). But this could be a different distribution than the one that would result were each researcher guided by her personal epistemic intention. To see this, suppose that there are j researchers in program 1 and k researchers in program 2. It could be that P1(j+1) > P2(k+1), and so a new researcher would choose program 1. But for all that, it could be that P1(j+1) - P1(j) < P2(k+1) - P2(k). That is, the boost in probability of success that program 2 gets from the addition of one more researcher is greater than the boost that program 1 gets. In that case, it is better for the group for a new researcher to join program 2. Kitcher goes on to argue that certain intuitively unscientific goals, such as the goal of fame or popularity, could help motivate researchers into a division of labor that serves the impersonal goals rather than the personal goals of each researcher.
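Kitcher's point can be reproduced with made-up return functions. The functional forms and parameters below are illustrative assumptions, not Kitcher's: program 1 is a mature program with a high ceiling, program 2 a newer one with a lower ceiling but steeper early returns.

```python
# Toy instance of Kitcher's division-of-labor point; return functions invented.

def p1(n):
    return 0.9 * (1 - 0.75 ** n)  # mature program: high ceiling

def p2(n):
    return 0.6 * (1 - 0.5 ** n)   # new program: lower ceiling, steep early returns

j, k = 3, 0  # researchers currently pursuing programs 1 and 2

# Guided by her personal intention, a newcomer joins the program whose
# chance of success would be highest with her on board:
personal_choice = 1 if p1(j + 1) > p2(k + 1) else 2
print(personal_choice)  # 1

# But the marginal boost to the community's chances favors program 2:
print(p1(j + 1) - p1(j) < p2(k + 1) - p2(k))  # True

# Group optimum: allocate N = 10 researchers to maximize p1(n) + p2(N - n).
N = 10
best_n = max(range(N + 1), key=lambda n: p1(n) + p2(N - n))
print(best_n, N - best_n)  # 7 3: the group does best with a division of labor
```

The newcomer's personally rational choice (program 1) and the group-optimal allocation (a 7–3 split rather than everyone in program 1) come apart, which is exactly the gap between personal and impersonal epistemic intentions.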

Kitcher does not claim that there is one objective answer to what the appropriate epistemic intentions or values are. Nevertheless, there is a consequentialist structure to his argument. Groups of scientists are seen as rational when they choose among options in such a way that they maximize their chance of attaining their epistemic goals. One could question whether this is enough to make the view count as a version of epistemic consequentialism. After all, the options that the agents in Kitcher’s model are choosing between are not beliefs or belief states, but instead decisions about which research program to pursue or about which experiment to run. In this way, Kitcher’s view looks to be an instance of methodological epistemic consequentialism as opposed to doxastic epistemic consequentialism: it is aimed at evaluating actions that are in some way closely related to epistemic ends, rather than at evaluating belief states themselves. Some have argued that approaches such as these do not actually address properly epistemic questions at all. For some thoughts on this, see Christensen (2004, 2007).

Others have followed the general argumentative structure of Kitcher (1990). Zollman (2007, 2010) and Mayo-Wilson, Zollman, and Danks (2011) have focused on the communication networks that might exist between scientists working on the same project. This work reveals some surprising conclusions, in particular, that it might sometimes be epistemically beneficial for the community of scientists to have less than full communication among the members. The basic reason for this is that limiting communication is one way to encourage diversity in research programs, which for Kitcher-like reasons can help the community do better than it otherwise would. Muldoon and Weisberg (2009) and Muldoon (2013) have focused on the kinds of research strategies that individual scientists might have, modeling scientific research as a hill-climbing problem of the sort studied in the computer science literature. They show how it can sometimes be beneficial for the group of scientists to have individuals who are more radical in their exploration strategies.

So far we have surveyed formal models in the philosophy of science literature that seem to take a consequentialist approach to epistemic evaluation. One of the main results of this work is to show how strategies that would be irrational if followed in isolation might yield rational group behavior. Others have emphasized something like this point, but without formal models. Miriam Solomon (1992), for instance, argues for a similar conclusion by drawing on work in psychology and considering the historical data about the shift in geology to accept continental drift. She argues that certain seeming psychological foibles of individual geologists, including cognitive bias and belief preservation, played an important role in the discovery of plate tectonics. Paradoxically, she argues, these attributes that are normally seen as rational failings were in fact conducive to scientific success because they made possible the distribution of research effort. That her work employs a kind of consequentialist picture is evidenced by the fact that she views the central normative question in the philosophy of science to be: “whether or not, and where and where not, our methods are conducive to scientific success...Scientific rationality is thus viewed instrumentally.” (p. 443)

Larry Laudan is another philosopher of science who adopts a generally consequentialist outlook. For Laudan (1984), the things we are ultimately evaluating are methodological rules. Laudan writes:

... a little reflection makes clear that methodological rules possess what force they have because they are believed to be instruments or means for achieving the aims of science. More generally, both in science and elsewhere, we adopt the procedural and evaluative rules we do because we hold them to be optimal techniques for realizing our cognitive goals or utilities. (1984, p. 26)

There is, on Laudan’s view, not one set of acceptable cognitive goals, although there are ways to rationally challenge the cognitive goals that someone holds. This can be done by either showing that the goals are unrealizable or showing that the goals do not reflect the communal practices that we endorse. On Laudan’s view, then, what has final epistemic value is the realizing of the cognitive goals that we have, so long as these goals are not ruled out in one of the ways above. We can then rank methodological rules, or groups of methodological rules, in virtue of how well they reach those cognitive goals that we have. We then evaluate those rules as rational or not in virtue of this ranking. Laudan does not say that the methodological rules must be optimal, but does suggest, as the quote above notes, that we must think that they are.

ii. Why Gather Evidence?

Another area of philosophy of science that seems committed to epistemic consequentialism concerns the initially odd-sounding question: why should a scientist gather more evidence? On its face, the answer to this question is obvious. But if we idealize scientists as perfectly rational agents, some models of rationality make the question more pressing. For instance, consider an austere version of the Bayesian account of epistemic rationality according to which one is epistemically rational if and only if one’s degrees of belief are probabilistically coherent and one updates one’s beliefs via conditionalization upon receipt of any evidence. An agent can do this perfectly well without ever gathering new evidence. In addition, notice that there is a risk associated with gathering new evidence. Although in the best-case scenario, one acquires information that moves one closer to the truth, it is of course possible that one gets misleading evidence and so is pushed further from the truth. Is there anything that can be said in defense of the intuitive verdict that despite this, it is still rational to gather evidence?

An early answer to this question is provided by I. J. Good (1967). Suppose that you must make a decision, and you can either perform an experiment first and then decide, or simply decide now. Good shows that if you choose by maximizing subjective expected value, if performing the experiment is cost-free, and if several other constraints are imposed, then the expected value (calculated before experimenting) of deciding after observing the experiment’s outcome is always at least as great as the expected value of deciding without it. Here then we have an argument in favor of a certain sort of epistemic behavior—gathering evidence—that is consequentialist at heart. It says that if you do this sort of thing, you can expect to make better choices. However, it is not clear that this is an epistemic consequentialist argument. At best, it suggests that experimenting is pragmatically rational. To drive this point home, note that there seem to be experiments that are epistemically rational to perform even if there is no reason to expect that any decision we will make depends on the outcome.
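Good's result can be illustrated with a toy decision problem. All the utilities, priors, and likelihoods below are invented: the agent chooses act A or B, payoffs depend on whether hypothesis H holds, and a cost-free experiment E has known likelihoods.

```python
# Toy illustration of Good's value-of-information result (numbers invented).

prior_h = 0.5
utility = {("A", True): 10, ("A", False): 0,
           ("B", True): 0, ("B", False): 8}
p_pos_given_h, p_pos_given_not_h = 0.8, 0.2  # P(E positive | H), P(E positive | ~H)

def expected_utility(act, p_h):
    return p_h * utility[(act, True)] + (1 - p_h) * utility[(act, False)]

# Deciding now:
value_now = max(expected_utility(a, prior_h) for a in ("A", "B"))

# Deciding after the experiment: average the best posterior choice over outcomes.
value_after = 0.0
for positive in (True, False):
    like_h = p_pos_given_h if positive else 1 - p_pos_given_h
    like_not_h = p_pos_given_not_h if positive else 1 - p_pos_given_not_h
    p_outcome = prior_h * like_h + (1 - prior_h) * like_not_h
    posterior = prior_h * like_h / p_outcome  # Bayes' theorem
    value_after += p_outcome * max(expected_utility(a, posterior) for a in ("A", "B"))

print(value_now, value_after)  # the second is never smaller than the first
```

With these numbers the free experiment strictly improves the agent's prospects, and Good's theorem guarantees that (given his constraints) it can never make them worse.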

Others, however, have attempted to extend the basic Good result to scenarios where only final epistemic value is at issue. Oddie (1997), for instance, shows that if one uses a proper scoring rule to measure accuracy and if one updates via conditionalization, then the expected final epistemic value of learning information from a partition is always at least as great as refusing to learn the information. Myrvold (2012) generalizes this basic result and shows that something similar holds even if we do not require that one updates via conditionalization. Instead, so long as one satisfies Bas van Fraassen’s (1984) reflection principle, then something similar to Oddie’s result holds. For commentary on van Fraassen’s reflection principle, see Maher (1992). For other work on the issue of gathering evidence, see Maher (1990) and Fallis (2007).
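The Oddie-style result can likewise be illustrated numerically. The prior and partition below are invented, and accuracy is measured by the Brier-style score used earlier in this article: the prior expectation of conditionalizing on a partition is at least the expected accuracy of keeping one's prior.

```python
# Toy illustration of an Oddie-style result (prior and partition invented).

prior = [0.5, 0.3, 0.2]    # credences over three worlds
partition = [[0, 1], [2]]  # one will learn which cell contains the actual world

def accuracy(credences, actual):
    # sum over worlds of 1 - (c - v)**2, v = 1 at the actual world, else 0
    return sum(1 - (c - (1 if w == actual else 0)) ** 2
               for w, c in enumerate(credences))

def conditionalize(p, cell):
    total = sum(p[w] for w in cell)
    return [p[w] / total if w in cell else 0.0 for w in range(len(p))]

# Expected accuracy of keeping the prior:
stay = sum(prior[w] * accuracy(prior, w) for w in range(len(prior)))

# Prior expected accuracy of conditionalizing on whichever cell turns out true:
learn = 0.0
for cell in partition:
    posterior = conditionalize(prior, cell)
    learn += sum(prior[w] * accuracy(posterior, w) for w in cell)

print(stay, learn)  # learn >= stay
```

Even though conditionalizing on misleading evidence can lower accuracy in a particular world, the expectation taken over the whole partition cannot be lower than standing pat, which is the consequentialist heart of the argument for gathering evidence.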

Work in this area seems clearly committed to an especially veritistic form of epistemic consequentialism. Here we have an argument in favor of acquiring new evidence (if it is available) that appeals solely to the increase in accuracy one can expect to get from such evidence. As Oddie (1997, p. 537) writes: “The idea that a cognitive state has a value which is completely independent of where the truth lies is just bizarre. Truth is the aim of inquiry.”

4. Summing Up: Some Useful Distinctions

Now that we have surveyed a variety of theories that seem to have some commitment to epistemic consequentialism, it is useful to remind ourselves of two important distinctions relevant to categorizing different species of epistemic consequentialism.

First, some of the theories discussed above are committed to restricted consequentialism. According to these views, the normative facts about Xs are determined by some restricted set of the consequences of the Xs. More precisely, consider a theory that will issue normative verdicts about some belief b. A restricted consequentialist view maintains that something has final epistemic value, but that the normative facts about b are not determined by the amount of final epistemic value contained in the entire set of b’s causal consequences. In the limit, none of the causal consequences of b are relevant; only the final epistemic value contained in b itself is relevant. For instance, Feldman’s view about justification, Foley’s view about rationality, the approach of cognitive decision theory, and some versions of the accuracy-first program appear to be restricted consequentialist views in this limiting sense. Feldman, recall, explicitly states that the causal consequences of adopting a belief are irrelevant to its justificatory status; Foley focuses on the goal of now believing the truth and not now believing falsely, so excludes causal consequences; and Joyce’s accuracy-first program looks at whether some doxastic state dominates another doxastic state when the states are looked at for their accuracy now. Reliabilism is arguably also a form of restricted consequentialism, because the causal consequences of the belief itself are not relevant to its normative status; rather, it is the status of the particular process of belief formation that led to the belief that is relevant to the belief’s normative status. A process of belief formation earns its status, in turn, in terms of the proportion of true beliefs that it directly produces, so not even the total consequences of a belief-forming process are relevant according to the reliabilist.

Unrestricted consequentialist views, on the other hand, are those according to which the normative facts about whatever is being evaluated are determined by the amount of final epistemic value in the entire set of that thing’s causal consequences. It is unclear whether we have seen any wholly unrestricted consequentialist views in this sense, although Goldman’s approach to social epistemology and Kitcher’s approach to the distribution of cognitive labor may come close.

It is something of an open question whether restricted consequentialism is genuinely a form of consequentialism. Some discussions of consequentialism in ethics suggest that restricted versions are not genuine instances of consequentialism (see, for instance, Pettit (1988), Portmore (2007), Smith (2009), and Brown (2011)). Klausen (2009) argues the same point specifically with respect to epistemology.

The second important distinction to keep in mind when categorizing species of epistemic consequentialism is a distinction between those theories that seek to evaluate belief states and those that seek to evaluate epistemically relevant actions. An example makes the distinction clearer. The accuracy-first program seeks to evaluate belief states based solely on their accuracy. Kitcher’s approach to the distribution of cognitive labor seeks to evaluate the decisions of scientists to engage in certain lines of research based on the ultimate payoff in terms of true belief for the scientific community. As noted above, we could call the first sort of approach an instance of doxastic epistemic consequentialism and the second an instance of methodological epistemic consequentialism (again, note that these terms are not established in the literature). With this distinction in hand, we can sort some of the theories above along this dimension. Attempts to explain why it is rational to gather evidence, much of social epistemology, and the work on communication structures and exploration strategies among scientists are instances of methodological epistemic consequentialism. Consequentialist analyses of justification, cognitive decision theory, and the accuracy-first program are instances of doxastic epistemic consequentialism.

5. Objections to Epistemic Consequentialism

Specific objections can be lodged against each particular theory committed to some form of epistemic consequentialism. Here we will focus on general objections to the fundamental idea behind epistemic consequentialism.

a. Epistemic Trade-Offs

Epistemic consequentialists maintain that, in some way, the right option is one that is conducive to whatever has final epistemic value. Say that you accept a trade-off if you sacrifice something of value for even more of what is valuable. Thus, if true belief has final epistemic value (and if each true belief has equal final epistemic value), you accept a trade-off when you sacrifice a true belief concerning p for two true beliefs about q and r. It is hard to see how one can hold a consequentialist view and not think that it is at least sometimes permissible to accept trade-offs. For then it would seem that rightness is no longer being understood in terms of conduciveness to what has value (though, as we will see, restricted consequentialists of a certain sort may be able to deny this).
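To make the arithmetic of a trade-off concrete, here is a minimal sketch under the simple veritist assumption just mentioned (each true belief carries one unit of final epistemic value; the value function and belief lists are illustrative, not from the text):

```python
# A toy veritist value function (illustrative assumption): one unit of
# final epistemic value per true belief in a belief set.
def epistemic_value(beliefs):
    """Total value of a belief set: +1 per true belief."""
    return sum(1 for _, is_true in beliefs if is_true)

# Option A: keep the single true belief concerning p.
option_a = [("p", True)]

# Option B: sacrifice the belief about p for two true beliefs, q and r.
option_b = [("q", True), ("r", True)]

# A view on which rightness is conduciveness to value ranks B over A.
assert epistemic_value(option_b) > epistemic_value(option_a)
```

On this toy measure the trade-off is straightforwardly sanctioned, since two units of value beat one; that is precisely what makes trade-off cases a pressure point for the view.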

The permissibility of accepting trade-offs, however, constitutes a problem for epistemic consequentialism. If one thinks about consequentialist theories in ethics, this is not so surprising. Some of the strongest intuitive objections to consequentialist moral theories are those that focus on trade-offs. Consider, for instance, the organ harvest counterexample to utilitarianism (Thomson 1985). In that scenario, a doctor has five patients all in dire need of a different organ transplant. The doctor also has a healthy patient who is a potential donor for each of the five patients. Because it is a consequentialist moral theory and endorses trade-offs, it seems that utilitarianism says the doctor is required to sacrifice the one to save the five. But, it is alleged, this flies in the face of common sense, and so we have a challenge for utilitarianism.

Trade-off objections to epistemic consequentialism (structurally similar to the organ harvest case) have been made explicitly by Firth (1981), Jenkins (2007), Littlejohn (2012), Berker (2013a, b), and Greaves (2013). And one can see hints of such an objection in Easwaran and Fitelson (2012) and Caie (2013).

The basic objection starts with the observation that a belief can be justified or rational or epistemically appropriate (or whatever other term for epistemic rightness one prefers) even if adopting that belief causes some epistemic catastrophe. Similarly, it seems that a belief can be unjustified or irrational or epistemically inappropriate even if adopting that belief results causally in some epistemic reward. For an example of the first sort, S might have significant evidence that he is an excellent judge of character, and so S’s believing this about himself might be justified for S. But it could be that this belief serves to make S overconfident in other areas of his life, so that S ends up misreading evidence quite badly in the long run. For an example of the second sort, S might have no evidence that God exists, but believe it anyway to make it more likely that S receives a large grant from a religiously affiliated (and unscrupulous) funding agency. The grant will allow S to believe many more true and interesting propositions than otherwise (the example is due to Fumerton (1995), p. 12). These kinds of examples seem to show that epistemic rightness cannot be understood in terms of conduciveness to what has final epistemic value.

There are two main responses that the epistemic consequentialist can make to the trade-off objection, and each comes with a challenge. The first response is to maintain that, appearances to the contrary, there are versions of epistemic consequentialism that do not sanction unintuitive trade-offs. For a response in this vein, see Ahlstrom-Vij and Dunn (2014). In ethics, some who think of themselves as consequentialists respond to analogous objections by introducing agent-relative values (see, for instance, Sen (1982) and Broome (1991)). The basic idea is that the value of an outcome can be agent-relative, which allows, for example, agent S to assign more value to the state in which S breaks no promises than someone else assigns to that same state. This allows one to give a consequentialist evaluation of rightness that does not always require saying that it is right for S to break a promise in order to ensure that two others do not break theirs. It is not clear how such a modification of consequentialism would best carry over to epistemic consequentialism, but it could represent a way of making this first response. The challenge for any response in this vein is to explain how such views are genuinely instances of epistemic consequentialism.

The second response to trade-off objections is to maintain that while epistemic consequentialism does sanction trade-offs, we can explain away the felt unintuitiveness of such verdicts. The challenge for this second response is to actually give such an explanation.

b. Positive Epistemic Duties

When it comes to moral obligation, it seems plausible that we sometimes have obligations to take certain actions and sometimes have obligations to refrain from certain actions. It is then natural to distinguish between positive duties—say, the obligation to take care of my children—and negative duties—say, the obligation to not steal from others. Consider how a similar distinction would be drawn in epistemology. Obligations to believe certain propositions would correspond to positive epistemic duties, while obligations to refrain from believing certain propositions would correspond to negative epistemic duties.

Littlejohn (2012) has argued that certain forms of epistemic consequentialism look as though they will naturally lead to positive epistemic duties. Suppose, as certain doxastic epistemic consequentialists will maintain, that whether we are obligated to believe or refrain from believing a proposition is a function of the final epistemic value of believing or refraining from believing that proposition. And suppose that the consequentialist also maintains that we have some negative epistemic duties; that is, there are situations where one is epistemically obligated to refrain from believing a proposition. The consequences of refraining in such a situation will have some level of epistemic value. But surely there are situations where believing a proposition has consequences with at least that much epistemic value; and if the value of the consequences is what generates the obligation to refrain, parity of reasoning suggests that it generates obligations to believe as well. Thus, it looks as though the consequentialist is committed to saying that there are positive epistemic duties: sometimes we are obligated to believe propositions.

However, some epistemologists hold that we have no positive epistemic duties. We may be obligated to refrain from believing certain things, but we have no duties to believe. Nelson (2010) provides one argument for this claim. He argues that if we had positive epistemic duties, we would have to believe each proposition that our evidence supported. But this means we would be epistemically obligated to believe infinitely many propositions, as Nelson argues that any bit of evidence supports infinitely many propositions. As we cannot believe infinitely many propositions, Nelson holds that we have no positive epistemic duties.

The thesis that there are no positive epistemic duties is controversial, as is Nelson’s argument for that claim. Nevertheless, this presents a potential worry for certain versions of epistemic consequentialism. It is perhaps worth noting that this sort of objection to epistemic consequentialism is in some ways analogous to objections that maintain that consequentialist views in ethics are overly demanding. For more on the issue of positive epistemic duties, see Stapleford (2013) and the discussion in Littlejohn (2012, ch. 2).

c. Lottery Beliefs

Suppose that you know there is a lottery with 10,000 tickets, each with an equal chance of winning, but where only one ticket will win. Consider the proposition that ticket 1437 will lose. It is incredibly likely that this proposition is true, and likewise for each of the 10,000 propositions of the form ticket n will lose. Nevertheless, a number of epistemologists maintain that one is not justified in believing such lottery propositions (for instance, BonJour (1980), Pollock (1995), Evnine (1999), Nelkin (2000), Adler (2005), Douven (2006), Kvanvig (2009), Nagel (2011), Littlejohn (2012), Smithies (2012), McKinnon (2013), and Locke (2014)).
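The probabilities in the lottery example can be sketched as follows (a minimal illustration using the example’s numbers; the variable names are ours):

```python
# Lottery from the example: 10,000 tickets, exactly one winner.
tickets = 10_000

# Probability that any particular ticket (say, ticket 1437) loses.
p_single_loses = (tickets - 1) / tickets  # 0.9999: incredibly likely

# Yet believing every proposition "ticket n will lose" amounts to
# believing that no ticket wins, which is guaranteed to be false:
p_all_lose = 0.0  # exactly one ticket wins by stipulation
```

The gap between the near-certainty of each individual lottery proposition and the guaranteed falsity of their conjunction is what gives purely statistical, truth-ratio-based accounts of justification trouble here.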

Some consequentialist approaches to justification, however, look as though they will say that one is justified in believing such lottery propositions. For instance, suppose that there is a process of belief formation that issues beliefs of the form ticket n is a loser. This process is highly reliable, and so beliefs produced by it are justified according to one version of reliabilism about justification. Some process reliabilists about justification might maintain that there is no such process in an attempt to avoid this implication of their view. However, as Selim Berker (2013b) has noted, the very structure of consequentialist views in epistemology suggests that some case can be constructed against the consequentialist in which a set of beliefs is justified purely in virtue of statistical information about the relative lack of falsehoods in a set of propositions.

Again, not everyone maintains that justification is lacking in such cases; some hold that while such lottery propositions cannot be known, they nevertheless can be justified. But a number of epistemologists deny this, and so we again have a potential worry for the consequentialist. For a response to this worry, see Ahlstrom-Vij and Dunn (2014).

6. References and Further Reading

  • Adler, J. (2005) ‘Reliabilist Justification (or Knowledge) as a Good Truth-Ratio’ Pacific Philosophical Quarterly 86: 445–458.
  • Ahlstrom-Vij, K. and Dunn, J. (2014) ‘A Defence of Epistemic Consequentialism’ Philosophical Quarterly 64: 541–551.
  • Angere, S. (2007) ‘The Defeasible Nature of Coherentist Justification’ Synthese 157: 321–335.
  • Berker, S. (2013a) ‘Epistemic Teleology and the Separateness of Propositions’ The Philosophical Review 122: 337–393.
  • Berker, S. (2013b) ‘The Rejection of Epistemic Consequentialism’ Philosophical Issues 23: 363–387.
  • Bishop, M. and Trout, J. D. (2005) Epistemology and the Psychology of Human Judgment. Oxford: Oxford University Press.
  • BonJour, L. (1980) ‘Externalist Theories of Empirical Knowledge’ Midwest Studies in Philosophy 5: 53–74.
  • BonJour, L. (1985) The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press.
  • Bovens, L., and Hartmann, S. (2003) Bayesian Epistemology. Oxford: Oxford University Press.
  • Broome, J. (1991) Weighing Goods: Equality, Uncertainty and Time. Oxford: Wiley-Blackwell.
  • Brown, C. (2011) ‘Consequentialize This’ Ethics 121: 749–771.
  • Caie, M. (2013) ‘Rational Probabilistic Incoherence’ Philosophical Review 122: 527–575.
  • Christensen, D. (2004) Putting Logic in Its Place. Oxford: Oxford University Press.
  • Christensen, D. (2007) ‘Epistemology of Disagreement: The Good News’ Philosophical Review 116: 187–217.
  • Conee, E. (1992) ‘The Truth Connection’ Philosophy and Phenomenological Research 52: 657–669.
  • Conee, E. and Feldman, R. (2008) ‘Evidence’ In Q. Smith (Ed.), Epistemology: New Essays. Oxford: Oxford University Press: 83–104.
  • DePaul, M. (2004) ‘Truth Consequentialism, Withholding and Proportioning Belief to the Evidence’ Philosophical Issues 14: 91–112.
  • Douglas, H. (2000) ‘Inductive Risk and Values in Science’ Philosophy of Science 67: 559–579.
  • Douglas, H. (2009) Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
  • Douven, I. (2006) ‘Assertion, Knowledge, and Rational Credibility’ Philosophical Review 115: 449–485.
  • Easwaran, K. and Fitelson, B. (2012) ‘An “Evidentialist” Worry about Joyce’s Argument for Probabilism’ Dialectica 66: 425–433.
  • Easwaran, K. and Fitelson, B. (2015) ‘Accuracy, Coherence, and Evidence’ In T. Szabo Gendler and J. Hawthorne (Eds.), Oxford Studies in Epistemology, Volume 5. Oxford: Oxford University Press.
  • Evnine, S. (1999) ‘Believing Conjunctions’ Synthese 118: 201–227.
  • Fallis, D. (2000) ‘Veritistic Social Epistemology and Information Science’ Social Epistemology 14: 305–316.
  • Fallis, D. (2006) ‘Epistemic Value Theory and Social Epistemology’ Episteme 2: 177–188.
  • Fallis, D. (2007) ‘Attitudes Toward Epistemic Risk and the Value of Experiments’ Studia Logica 86: 215–246.
  • Feldman, R. (1988) ‘Epistemic Obligations’ Philosophical Perspectives 2: 236–256.
  • Feldman, R. (2000) ‘The Ethics of Belief’ Philosophy and Phenomenological Research 60: 667–695.
  • Feldman, R. and Conee, E. (1985) ‘Evidentialism’ Philosophical Studies 48: 15–34.
  • Firth, R. (1981) ‘Epistemic Merit, Intrinsic and Instrumental’ Proceedings and Addresses of the American Philosophical Association 55: 5–23.
  • Foley, R. (1987) The Theory of Epistemic Rationality. Cambridge, MA: Harvard University Press.
  • Fumerton, R. (1995) Metaepistemology and Skepticism. Lanham, MD: Rowman & Littlefield.
  • Goldman, A. (1979) ‘What Is Justified Belief?’ In G. Pappas (Ed.), Justification and Knowledge. Springer: 1–23.
  • Goldman, A. (1986) Epistemology and Cognition. Cambridge, MA: Harvard University Press.
  • Goldman, A. (1999) Knowledge in a Social World. Oxford: Oxford University Press.
  • Good, I. J. (1967) ‘On the Principle of Total Evidence’ British Journal for the Philosophy of Science 17: 319–321.
  • Greaves, H. (2013) ‘Epistemic Decision Theory’ Mind 122: 915–952.
  • Greaves, H. and Wallace, D. (2006) ‘Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility’ Mind 115: 607–632.
  • Haddock, A., Millar, A., and Pritchard, D. (Eds.) (2009) Epistemic Value. Oxford: Oxford University Press.
  • Harman, G. (1988) Change in View. Cambridge, MA: MIT Press.
  • Hempel, C. (1960) ‘Inductive Inconsistencies’ Synthese 12: 439–469.
  • Huemer, M. (2011) ‘Does Probability Theory Refute Coherentism?’ Journal of Philosophy 108: 35–54.
  • Jenkins, C. S. (2007) ‘Entitlement and Rationality’ Synthese 157: 25–45.
  • Joyce, J. (1998) ‘A Nonpragmatic Vindication of Probabilism’ Philosophy of Science 65: 575–603.
  • Joyce, J. (2009) ‘Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief’ In Huber and Schmidt-Petri (Eds.) Degrees of Belief. Springer: 263–300.
  • Kagan, S. (1997) Normative Ethics. Boulder, CO: Westview Press.
  • Kitcher, P. (1990) ‘The Division of Cognitive Labor’ The Journal of Philosophy 87: 5–22.
  • Klausen, S. H. (2009) ‘Two Notions of Epistemic Normativity’ Theoria 75: 161–178.
  • Klein, P. and Warfield, T. A. (1994) ‘What Price Coherence?’ Analysis 54: 129–132.
  • Kvanvig, J. (2003) The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
  • Kvanvig, J. (2009) ‘Assertion, Knowledge and Lotteries’ In Greenough and Pritchard (Eds.), Williamson on Knowledge. Oxford: Oxford University Press: 140–160.
  • Laudan, L. (1984) Science and Values. Berkeley: University of California Press.
  • Laudan, L. (2006) Truth, Error, and Criminal Law. Cambridge: Cambridge University Press.
  • Leitgeb, H. and Pettigrew, R. (2010a) ‘An Objective Justification of Bayesianism I: Measuring Inaccuracy’ Philosophy of Science 77: 201–235.
  • Leitgeb, H. and Pettigrew, R. (2010b) ‘An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy’ Philosophy of Science 77: 236–272.
  • Levi, I. (1967) Gambling with Truth. Cambridge, MA: MIT Press.
  • Littlejohn, C. (2012) Justification and the Truth Connection. Cambridge: Cambridge University Press.
  • Locke, D. T. (2014) ‘The Decision-Theoretic Lockean Thesis’ Inquiry 57: 28–54.
  • Maher, P. (1990) ‘Why Scientists Gather Evidence’ British Journal for the Philosophy of Science 41: 103–119.
  • Maher, P. (1992) ‘Diachronic Rationality’ Philosophy of Science 59: 120–141.
  • Maher, P. (1993) Betting on Theories. Cambridge: Cambridge University Press.
  • Maitzen, S. (1995) ‘Our Errant Epistemic Aim’ Philosophy and Phenomenological Research 55: 869–876.
  • Mayo-Wilson, C., Zollman, K. J., and Danks, D. (2011) ‘The Independence Thesis: When Individual and Social Epistemology Diverge’ Philosophy of Science 78: 653–677.
  • McKinnon, R. (2013) ‘Lotteries, Knowledge, and Irrelevant Alternatives’ Dialogue 52: 523–549.
  • McNaughton, D. and Rawling, P. (1991) ‘Agent-Relativity and the Doing-Happening Distinction’ Philosophical Studies 63: 163–185.
  • Muldoon, R. (2013) ‘Diversity and the Division of Cognitive Labor’ Philosophy Compass 8: 117–125.
  • Muldoon, R. and Weisberg, M. (2009) ‘Epistemic Landscapes and the Division of Cognitive Labor’ Philosophy of Science 76: 225–252.
  • Myrvold, W. (2012) ‘Epistemic Values and the Value of Learning’ Synthese 187: 547–568.
  • Nagel, J. (2011) ‘The Psychological Basis of the Harman-Vogel Paradox’ Philosophers’ Imprint 11: 1–28.
  • Nagel, T. (1986) The View from Nowhere. Oxford: Oxford University Press.
  • Nelkin, D. K. (2000) ‘The Lottery Paradox, Knowledge, and Rationality’ Philosophical Review 109: 373–409.
  • Nelson, M. (2010) ‘We Have No Positive Epistemic Duties’ Mind 119: 83–102.
  • Nozick, R. (1974) Anarchy, State, and Utopia. New York: Basic Books.
  • Oddie, G. (1997) ‘Conditionalization, Cogency, and Cognitive Value’ British Journal for the Philosophy of Science 48: 533–541.
  • Olsson, E. J. (2005) Against Coherence: Truth, Probability, and Justification. Oxford: Oxford University Press.
  • Percival, P. (2002) ‘Epistemic Consequentialism’ Proceedings of the Aristotelian Society Supplementary Volume 76: 121–151.
  • Pettigrew, R. (2012) ‘Accuracy, Chance, and the Principal Principle’ Philosophical Review 121: 241–275.
  • Pettigrew, R. (2013a) ‘A New Epistemic Utility Argument for the Principal Principle’ Episteme 10: 19–35.
  • Pettigrew, R. (2013b) ‘Accuracy and Evidence’ Dialectica 67: 579–596.
  • Pettigrew, R. (2013c) ‘Epistemic Utility and Norms for Credences’ Philosophy Compass 8: 897–908.
  • Pettigrew, R. (2015) ‘Accuracy and the Belief-Credence Connection’ Philosophers’ Imprint 15: 1–20.
  • Pettit, P. (1988) ‘The Consequentialist Can Recognise Rights’ The Philosophical Quarterly 38: 42–55.
  • Pettit, P. (2000) ‘Non-consequentialism and Universalizability’ The Philosophical Quarterly 50: 175–190.
  • Pollock, J. (1995) Cognitive Carpentry. Cambridge, MA: MIT Press.
  • Portmore, D. (2007) ‘Consequentializing Moral Theories’ Pacific Philosophical Quarterly 88: 39–73.
  • Pritchard, D., Millar, A., and Haddock, A. (2010) The Nature and Value of Knowledge: Three Investigations. Oxford: Oxford University Press.
  • Sen, A. (1982) ‘Rights and Agency’ Philosophy & Public Affairs 11: 3–39.
  • Smart, J. J. C. and Williams, B. (1973) Utilitarianism: For and Against. Cambridge: Cambridge University Press.
  • Smith, M. (2009) ‘Two Kinds of Consequentialism’ Philosophical Issues 19: 257–272.
  • Smithies, D. (2012) ‘The Normative Role of Knowledge’ Nous 46: 265–288.
  • Solomon, M. (1992) ‘Scientific Rationality and Human Reasoning’ Philosophy of Science 59: 439–455.
  • Stalnaker, R. (2002) ‘Epistemic Consequentialism’ Proceedings of the Aristotelian Society Supplementary Volume 76: 152–168.
  • Stapleford, S. (2013) ‘Imperfect Epistemic Duties and the Justificational Fecundity of Evidence’ Synthese 190: 4065–4075.
  • Stich, S. (1990) The Fragmentation of Reason. Cambridge, MA: MIT Press.
  • Thomson, J. J. (1985) ‘The Trolley Problem’ The Yale Law Journal 94: 1395–1415.
  • van Fraassen, B. (1984) ‘Belief and the Will’ The Journal of Philosophy 81: 235–256.
  • Whitcomb, D. (2007) An Epistemic Value Theory. (Doctoral dissertation) Retrieved from the Rutgers University Community Repository.
  • Williams, J. R. G. (2012) ‘Gradational Accuracy and Nonclassical Semantics’ The Review of Symbolic Logic 5: 513–537.
  • Zagzebski, L. (2003) ‘Intellectual Motivation and the Good of Truth’ In Zagzebski, L. and DePaul, M. (Eds.) Intellectual Virtue: Perspectives from Ethics and Epistemology. Oxford University Press: 135–154.
  • Zollman, K. J. (2007) ‘The Communication Structure of Epistemic Communities’ Philosophy of Science 74: 574–587.

Religious Epistemology

Belief in God, or some form of transcendent Real, has been assumed in virtually every culture throughout human history. The issue of the reasonableness or rationality of belief in God or particular beliefs about God typically arises when a religion is confronted with religious competitors or the rise of atheism or agnosticism. In the West, belief in God was assumed in the dominant Jewish, Christian and Islamic religions. God, in this tradition, is the omnipotent, omniscient, perfectly good and all-loving Creator of the universe (such a doctrine is sometimes called ‘bare theism’). This article considers the following epistemological issues: the reasonableness of belief in the Judeo-Christian-Muslim God (“God,” for short), the nature of reason, the claim that belief in God is not rational, defenses of its rationality, and approaches that recommend groundless belief in God or philosophical fideism.

Is belief in God rational? The evidentialist objector says “No” due to the lack of evidence. Theists who say “Yes” fall into two main categories: those who claim that there is sufficient evidence and those who claim that evidence is not necessary. Theistic evidentialists contend that there is enough evidence to ground rational belief in God, while Reformed epistemologists contend that evidence is not necessary to ground rational belief in God (but that belief in God is grounded in various characteristic religious experiences). Philosophical fideists deny that belief in God belongs in the realm of the rational. And, of course, all of these theistic claims are widely and enthusiastically disputed by philosophical non-theists.

Table of Contents

  1. Reason/Rationality
  2. The Evidentialist Objection to Belief in God
  3. The Reasonableness of Belief in God
    1. Theistic Evidentialism
    2. Sociological Digression
    3. Moral Analogy
    4. Reformed Epistemology
    5. Religious Experience
    6. Internalism/Externalism
    7. The Rational Stance
    8. Objections to Reformed Epistemology
  4. Groundless Believing
  5. Conclusion
  6. References and Further Reading

1. Reason/Rationality

Reason is a fallible human tool for discovering truth or grasping reality. Although reason aims at the truth, it may fall short. In addition, rationality is more a matter of how one believes than what one believes. For example, one might irrationally believe something that is true: suppose one believed that the center of the earth is molten metal because one believes that he or she travels there every night (while it’s cool). And one might rationally believe what is false: it was rational for most people twenty centuries ago to believe that the earth is flat. And finally, rationality is person- and situation-specific: what is rational for one person at a particular socio-historical time and place might not be rational for another person at a different time and place; indeed, what is rational for one person may be irrational for another person in the very same time and place. This has relevance for a discussion of belief in God because “the rationality of religious belief” is typically discussed abstractly, independent of any particular believer, and is often believed to be settled once and for all, either positively or negatively (say, by Aquinas or Hume, respectively). The proper question should be, “Is belief in God rational for this person in that time and place?”

Rationality is a normative property possessed by a belief or a believer (although the previous paragraph gives reasons to suggest that rationality applies more properly to believers than to beliefs). Precisely what this normative property is, however, is a matter of great dispute. Some believe that we have intellectual duties (for example, to acquire true beliefs and avoid false beliefs, or to believe only on the basis of evidence or argument). Some deny that we have intellectual duties because, by and large, beliefs are not something we freely choose (for example, look outside at a tree and try to choose not to believe that there is a tree there; or, close your eyes and, if you believe in God, decide not to believe, or vice versa, and then decide to believe in God again). Since we have duties only when we are free to fulfill or not fulfill them (“Ought implies can”), we cannot have intellectual duties if we are not free to directly choose our beliefs. So the normative property espoused by such thinkers might be intellectual permissibility rather than intellectual duty.

Since the time of the Enlightenment, reason has been assigned a huge role for (valid or strong) inference: rationality is often a matter of assembling available (often empirical, typically propositional) evidence and assessing its deductive or inductive support for other beliefs. Although some beliefs may and must be accepted without inference, the vast majority of beliefs (more precisely, the vast majority of philosophical, scientific, ethical, theological and even common-sensical beliefs) rationally require the support of evidence or argument. This view of reason is often taken ahistorically: rationality is simply a matter of timeless, non-person-indexed propositional evidence and its logical bearing on the conclusion. If it can be shown that an argument is invalid or weak, belief in its conclusion would be irrational for every person in every time and place. This violates the viable intuition that rationality is person- and situation-specific. Although one argument for belief in God might be invalid, there might be other arguments that support belief in God. Or, supposing all of the propositional evidence for God’s existence is deficient, a person may have religious experience as the grounds of her belief in God.

Following Thomas Reid, we shall argue that ‘rationality’ in many of the aforementioned important cases need not, indeed cannot, require (valid or strong) inference. Our rational cognitive faculties include a wide variety of belief-producing mechanisms, few of which could or should pass the test of inference. We will let this view, and its significance for belief in God, emerge as the discussion proceeds.

2. The Evidentialist Objection to Belief in God

Belief in God is considered irrational for two primary reasons: lack of evidence and evidence to the contrary (usually the problem of evil, which will not be discussed in this essay). Note that both of these positions reject the rationality of belief in God on the basis of an inference. Bertrand Russell was once asked what he would say to God if he were to come before God. Russell replied, “Not enough evidence, God, not enough evidence.” Following Alvin Plantinga, we will call the claim that belief in God lacks evidence and is thus irrational the evidentialist objection to belief in God.

The roots of evidentialism may be found in the Enlightenment demand that all beliefs be subjected to the searching criticism of reason; if a belief cannot survive the scrutiny of reason, it is irrational. Kant’s charge is clear: “Dare to use your own reason.” Given increasing awareness of religious options, Hobbes would ask: “If one prophet deceive another, what certainty is there of knowing the will of God, by any other way than that of reason?” Although the Enlightenment elevation of Reason would come to be associated with a corresponding rejection of rational religious belief, many of the great Enlightenment thinkers were themselves theists (including, for example, Kant and Hobbes).

The evidentialist objection may be formalized as follows:

(1) Belief in God is rational only if there is sufficient evidence for the existence of God.

(2) There is not sufficient evidence for the existence of God.

(3) Therefore, belief in God is irrational.
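Formally, the objection is an instance of modus tollens, so the argument is valid; the dispute concerns the truth of the premises. A minimal sketch of its logical form in Lean, with `Rational` and `Evidence` as placeholder propositions of our own labeling:

```lean
-- Modus tollens form of the evidentialist objection (illustrative labels).
theorem evidentialist_objection
    (Rational Evidence : Prop)
    (premise1 : Rational → Evidence)  -- (1) rational only if sufficient evidence
    (premise2 : ¬Evidence)            -- (2) there is not sufficient evidence
    : ¬Rational :=                    -- (3) belief in God is not rational
  fun h => premise2 (premise1 h)
```

Because the form is valid, responses to the objection must target premise (1) (as Reformed epistemologists do) or premise (2) (as theistic evidentialists do), rather than the inference itself.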

The evidentialist objection is not offered as a disproof of the existence of God—that is, the conclusion is not "God does not exist." Rather the conclusion is, even if God were to exist, it would not be reasonable to believe in God. According to the evidentialist objection, rational belief in God hinges on the success of theistic arguments. Prominent evidentialist objectors include David Hume, W. K. Clifford, J. L. Mackie, Bertrand Russell and Michael Scriven. This view is probably held by a large majority of contemporary Western philosophers. Ironically, in most areas of philosophy and life, most philosophers are not (indeed could not be) evidentialists. We shall treat this claim shortly.

The claim that there is not sufficient evidence for belief in God is usually based on a negative assessment of the success of theistic proofs or arguments. Following Hume and Kant, the standard arguments for the existence of God—cosmological, teleological and ontological—are judged to be defective in one respect or another.

The claim that rational belief in God requires the support of evidence or argument is usually rooted in a view of the structure of knowledge that has come to be known as ‘classical foundationalism.’ Classical foundationalists take a pyramid or a house as metaphors for their conceptions of knowledge or rationality. A secure house or pyramid must have secure foundations sufficient to carry the weight of each floor of the house and the roof. A solid, enduring house has a secure foundation with each of the subsequent floors properly attached to that foundation. Ultimately, the foundation carries the weight of the house. In a classical foundationalist conception of knowledge, the foundational beliefs must likewise be secure, enduring and adequate to bear “the weight” of all of the non-foundational or higher-level beliefs. These foundational beliefs are characterized in such a manner to ensure that knowledge is built on a foundation of certitudes (following Descartes). The candidates for these foundational certitudes vary from thinker to thinker but, broadly speaking, reduce to three: if a belief is self-evident, evident to the senses, or incorrigible, it is a proper candidate for inclusion among the foundations of rational belief.

What sorts of beliefs are self-evident, evident to the senses, or incorrigible? A self-evident belief is one that, once you understand it, you see to be true. While this definition is probably not self-evident, let’s proceed to understand it by way of example. Read the following fairly quickly:

(4) When equals are added to equals you get equals.

Do you think (4) is true? False? Not sure? Let me explain it. When equals (2 and 1+1) are added to equals (2 and 1+1) you get equals (4). Or, to make this clear: 2 + 2 = 1 + 1 + 1 + 1. Now that you understand (4), you see it to be true. I didn’t argue for (4); I simply helped you to understand it, and upon understanding it, you saw it to be true. That is, (4) is self-evident. Typical self-evident beliefs include the laws of logic and arithmetic and some metaphysical principles like “An object can’t be red all over and blue all over at the same time.” A proposition is evident to the senses just in case it is properly acquired by the use of one’s five senses. These sorts of propositions include “The grass is green,” “The sky is blue,” “Honey tastes sweet,” and “I hear a mourning dove.” Some epistemologists exclude propositions that are evident to the senses from the foundations of knowledge because of their lack of certainty (the sky may be as colorless as a piece of glass but simply refract blue light waves; we may be sampling artificial, not real, honey; someone may be blowing a bird whistle; and so on). In order to ensure certainty, some have shifted to incorrigibility as the criterion of foundational beliefs. Incorrigible beliefs are first-person psychological states (seeming or appearance beliefs) about which I cannot be wrong. For example, I might be mistaken about the color of the grass or sky but I cannot be mistaken about the following: “The grass seems green to me” or “The sky appears to me to be blue.” I might be mistaken about the color of grass, and so such a belief is not certain for me, but I can’t be wrong about what the color of grass seems to be to me.

Now let us return to belief in God. Why do evidentialists hold (1), the claim that rational belief in God requires the support of evidence or argument? This is typically because they subscribe to classical foundationalism. A belief can be held without argument or evidence only if it is self-evident, evident to the senses, or incorrigible. Belief in God is not self-evident—it is not such that upon understanding the notion of God, you see that God exists. For example, Bertrand Russell understands the proposition “God exists” but does not see it to be true. So, belief in God is not a good candidate for self-evidence. Belief in God is not evident to the senses because God, by definition, transcends the sensory world. God cannot be seen, heard, touched, tasted or smelled. When people make claims such as “God spoke to me” or “I touched God,” they are using “spoke” and “touched” in a metaphorical sense, not a literal sense; literally, God is beyond the senses. So God’s existence is not evident to the senses. And finally, a person might be wrong about God’s existence and so belief in God cannot be incorrigible. Of course, “it seems to me that God exists” could be incorrigible but God’s seeming existence is a long way from God’s existence!

So, belief in God is neither self-evident, evident to the senses, nor incorrigible. Therefore, belief in God, according to classical foundationalism, cannot properly be included among the foundations of one’s rational beliefs. And, if it is not part of the foundations, it must be adequately supported by the foundational beliefs—that is, belief in God must be held on the basis of other beliefs and so must be argued to, not from. According to classical foundationalism, belief in God is not rational unless it is supported by evidence or argument. Classical foundationalism, as assumed in the Enlightenment, elevated theistic arguments to a status never held before in the history of Western thought. Although earlier thinkers developed theistic arguments, they seldom assumed that such arguments were necessary for rational belief in God. After the period of the Enlightenment, thinkers in the grips of classical foundationalism would hold belief in God up to the demand of rigorous proof.

3. The Reasonableness of Belief in God

There are two main strategies theists employ when responding to the evidentialist objection to belief in God. The first strategy is to argue against the second premise, the claim that there is insufficient evidence for the existence of God. The second strategy is to argue against the first premise, the claim that belief in God is rational only if it is supported by sufficient evidence.

a. Theistic Evidentialism

Consider first the claim that there is not sufficient evidence for the existence of God. This view has been historically rejected by Aristotle, Augustine, Anselm, Thomas Aquinas, John Duns Scotus, John Locke, William Paley and C. S. Peirce, to name but a few. But suppose we all agreed that the arguments offered by Aristotle and others for the existence of God were badly flawed. (“We know better now.”) Does that imply that earlier theists were irrational? Does the evidence have to support, in some timeless way—irrespective of any particular person—belief in God? Aristotle, Augustine, Aquinas, et al., were brilliant people doing the best they could with the most sophisticated belief-set available to them and judged, on the basis of their best lights, that the evidence supported belief in God. Are they nonetheless irrational? For example, suppose that, ignorant of the principle of inertia, Aquinas believed that God must be actively involved in the continual motion of the planets. That is, suppose that, using the best physics of his day, Aquinas believed in the scientific necessity of belief in God. According to his best lights, Aquinas thought that the evidence clearly supported belief in God. Would Aquinas be irrational? Evidentialist objectors might concede that Aquinas was not irrational, in spite of his bad arguments and, therefore, might not view rationality as being timeless. But, they would argue, it is no longer reasonable for anyone to believe in God because now we all see or should see that the evidence is clearly insufficient to support the conclusion that God exists. (This ‘we’ tends toward the princely philosophical.)

Some theists reject this conclusion, judging that there is adequate evidence to support God’s existence. Rejecting the idea that theistic arguments died along with Kant and Hume, these thinkers offer new evidence or refashion the old evidence for the existence of God. William Lane Craig (Craig and Smith 1993), for example, has developed a new version of the old Islamic Kalaam cosmological argument for the existence of God. This argument attempts to demonstrate the impossibility of time proceeding infinitely into the past; the universe must therefore have had a beginning in time. In addition, both physicists and philosophers have argued that the apparent fine-tuning of the cosmological constants to permit human life is best explained by God’s intelligent superintendence. And some argue that irreducibly complex biological phenomena such as cells or kidneys could not have arisen by chance. Robert Merrihew Adams (1987) has revived moral arguments for the existence of God. Alvin Plantinga (1993b) has argued that the conjunction of naturalism and evolution is self-refuting. William Alston (1991) has defended religious experience as a source of justified belief in the existence of God. In addition, theistic arguments have been developed that are based on the existence of flavors, colors and beauty. And some thinkers, such as Richard Swinburne (1979, 1984), contend that the cumulative forces of these various kinds of evidence mutually reinforce the likelihood of God’s existence. Thus, there is an ample contingent of philosophers defending the claim that belief in God is rational on the basis of the evidence (and an equal and opposite force opposing them). So the project of securing belief in God on the basis of evidence or argument is ongoing.

Many theists, then, concur with the evidentialist demand for evidence and seek to meet that demand by offering arguments that support the existence of God. Of course, these arguments have been widely criticized by atheistic evidentialists. But for better or for worse, many theistic philosophers have hitched the rationality of belief in God to the wagon of evidence.

Now suppose, as is the case, that the majority of philosophers believe that these attempts to prove God’s existence are feeble failures. Would that perforce make religious believers irrational? If one, by the best of one’s lights, judges that God exists given the carefully considered evidence, is one nonetheless irrational if the majority of the philosophical community happens to disagree? These questions suggest that judgments of rationality and irrationality are difficult to make. And they suggest that rationality and irrationality may be more complicated than classical foundationalism assumes.

b. Sociological Digression

Very few philosophical positions (and this is an understatement) enjoy the kind of evidential support that classical foundationalism demands of belief in God; yet most of these are treated as rational. No philosophical position—belief in other minds, belief in the external world, the correspondence theory of truth or Quine’s indeterminacy of translation thesis—is properly based on beliefs that are self-evident, evident to the senses, or incorrigible. Indeed, we may question whether there is a single philosophical position that has been so amply justified (or could be). Why is belief in God held to a higher evidential standard than other philosophical beliefs? Some suggest that this demand is simply arbitrary at best or intellectually imperialist at worst.

c. Moral Analogy

Consider your moral beliefs. None of these beliefs will be self-evident, evident to the senses, or incorrigible. Now suppose you hold a moral belief that is not the philosophical fashion these days. Would you be irrational if the majority of contemporary philosophers disagreed with you? Perhaps you’d be irrational if moral beliefs contrary to yours could be established on the basis of widely known arguments from premises that are self-evident, evident to the senses, or incorrigible. But there may be no such arguments in the history of moral theory. Moral beliefs are not well-justified on the basis of argument or evidence in the classical foundationalist sense (or probably in any sense of “well-justified”). So, the fact that the majority of contemporary philosophers reject your moral beliefs (or belief in God for that matter) may have little or no bearing on the rationality of your beliefs. The sociological digression and moral analogy suggest that the philosophical emphasis on argument, certainty, and consensus for rationality might be misguided.

d. Reformed Epistemology

Let us now turn to those who reject the first premise of the evidentialist objection to belief in God, the claim that rational belief in God requires the support of evidence or argument. Recent thinkers such as Alvin Plantinga, Nicholas Wolterstorff and William Alston, in their so called Reformed Epistemology, have argued that belief in God does not require the support of evidence or argument in order for it to be rational (cf. Plantinga and Wolterstorff 1983). In so doing, they reject the evidentialist objector’s assumptions about rationality.

Reformed epistemologists argue that the first problem with the evidentialist objection is that the universal demand for evidence simply cannot be met in a large number of cases with the cognitive equipment that we have. No one has ever been able to offer proofs for the existence of other persons, for inductive beliefs (e.g., that the sun will rise in the future), or for the reality of the past (perhaps, as Bertrand Russell coyly suggested, we were created five minutes ago with our memories intact) that satisfy classical foundationalist requirements for proof. So, according to classical foundationalism, belief in the past and inductive beliefs about the future are irrational. This list could be extended indefinitely.

There is also a limit to the things that human beings can prove. If we were required to prove everything, there would be an infinite regress of provings. There must be some truths that we can just accept and reason from. Thus, we can’t help but trust our cognitive faculties. Moreover, it seems that we will reach the limit of proof very quickly if, as classical foundationalism insists, the basis for inference includes only beliefs that are self-evident, evident to the senses, or incorrigible. For these reasons, reformed epistemologists doubt that classical foundationalists are correct in claiming that the proper starting point of reason is self-evidence, evidence to the senses, and incorrigibility.

A second criticism of classical foundationalism, first offered by Plantinga, is that it is self-referentially inconsistent. That is, classical foundationalism must be rejected by its own account. Recall classical foundationalism (CF):

A proposition p is rational if and only if p is self-evident, evident to the senses or incorrigible or if p can be inferred from a set of propositions that are self-evident, evident to the senses, or incorrigible.
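The structure of CF can be put schematically; this formalization is added here for clarity and is not in the original text. Let F(p) abbreviate “p is self-evident, evident to the senses, or incorrigible”:

```latex
% Classical foundationalism (CF), schematically:
% p is rational iff p is foundational, or p is inferable
% from some set \Gamma of foundational propositions.
R(p) \;\leftrightarrow\; F(p) \,\vee\, \exists\,\Gamma\,\bigl[\,(\forall q \in \Gamma)\,F(q) \;\wedge\; \Gamma \vdash p\,\bigr]
```

The self-refutation charge then has a simple form: CF itself does not satisfy F, and no set of propositions satisfying F entails CF; hence, by CF’s own standard, CF is not rational.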

Consider CF itself. Is it rational, given its own conditions, to accept classical foundationalism? Classical foundationalism is not self-evident: upon understanding it many people believe it false. If one can understand a proposition and reject it, that proposition cannot be self-evident. CF is also not a sensory proposition—one doesn’t see, taste, smell, touch or hear it. So, classical foundationalism is not evident to the senses. And even if one should accept classical foundationalism, one might be wrong; so classical foundationalism is not incorrigible. Since classical foundationalism is neither self-evident, evident to the senses nor incorrigible, it can only be rationally maintained if it can be inferred from propositions that (ultimately) are self-evident, evident to the senses or incorrigible. Is that possible? Consider a representative set of evidential propositions, E, that are self-evident, evident to the senses or incorrigible:

Evidence (E):


    • When equals are added to equals you get equals.
    • 2 + 2 = 4.
    • Grass is green.
    • The sky is blue.
    • Grass seems green to me.
    • The sky appears to me to be blue.


Limiting yourself to propositions that are self-evident, evident to the senses or incorrigible, you can expand this list as exhaustively as you like. We have enough in E to make our case. Given E as evidence, can CF be inferred? Is E adequate evidence for CF? It’s hard to imagine how it could be. Indeed all of the propositions in E are irrelevant to the truth of CF. E simply cannot logically support CF. So, CF is not self-evident, evident to the senses or incorrigible, nor can CF be inferred from a set of propositions that are self-evident, evident to the senses or incorrigible. So, CF, by its own account, is irrational. If CF were true, it would be irrational to accept it. Better simply to reject it!

Thomas Reid (1710-1796), whom Plantinga and Wolterstorff follow, was an early critic of classical foundationalism. Reid argued that we have been outfitted with a host of cognitive faculties that produce beliefs that we can reason from (the foundations of believings). Plantinga calls these basic beliefs. The kinds of beliefs that we do and must reason to is a small subset of the kinds of beliefs that we do and must reason from. The latter must be accepted without the aid of proof. In most cases we must rely on our intellectual equipment to produce beliefs in the appropriate circumstances, without evidence or argument. For example, we simply find ourselves believing in other persons. A person is a center of self-conscious thoughts and feelings and first-person experience. While we can see a human face or a body, we can’t see another’s thoughts or feelings. Consider a person, Emily, whose leg is poked with a needle. We can see Emily recoil and her face screw up, and we can hear her yelp. So we can see Emily’s pain-behavior, but we cannot see her pain. The experience of pain is just the sort of inner experience that is typical of persons. For all we can know from Emily’s pain-behavior, she might be a cleverly constructed automaton (like Data of Star Trek fame or an exact human replica all the way down to the neurons). Or, for all we know, Emily might be a person just like us with the characteristic interior life and experience of persons. The point is, you can’t tell, just from Emily’s pain behavior, if she has any inner experience of pain. So you can’t tell by the things to which you have evidential access if Emily is a person. No one has ever been able to develop a successful argument to prove that there are other persons. So if classical foundationalism were true, it would not be reasonable to believe in the existence of other persons. But surely there are other persons whose existence it is reasonable to accept. 
So much the worse for classical foundationalism, Reidians say. Similar problems arise for classical foundationalism concerning beliefs in the past, the future, and the external world. No justification-conferring inference is or could be involved. Yet, the Reidian claims, we are perfectly within our epistemic rights in holding these basic beliefs. Thus, we should conclude that these beliefs are properly basic (that is, non-inferential but justified beliefs) and should reject classical foundationalism’s claim to the contrary.

Granting that a great many of our important beliefs are non-inferential, could one reasonably find oneself believing in God without evidence or argument? ‘Evidence’ is to be understood here as most evidentialists understand it, namely as the kind of propositional evidence one might find in a theistic argument and not the kind of experiential evidence typically thought to ground religious belief. Could belief in God be properly basic?

There are at least two reasons to believe that it might be rational for a person to accept belief in God without the support of an argument. The first is a parity argument. We must, by our nature, accept the deliverances of our cognitive faculties, including those that produce beliefs in the external world, other persons, that the future will be like the past, the reality of the past, and what other people tell us—just to name a few. For the sake of parity, we should trust the deliverances of the faculty that produces in us belief in the divine (what Plantinga (2000), following John Calvin, calls the sensus divinitatus, the sense of the divine). Of course, some philosophers deny that we have a sensus divinitatus and so reject the parity argument. The second reason is that belief in God is more like belief in a person than belief in a scientific hypothesis. Human relations demand trust, commitment, and faith. If belief in God is more like belief in other persons than belief in atoms, then the trust that is appropriate to persons will be appropriate to God. William James offers a similar argument in “The Will to Believe.”

Reformed epistemologists hold that one can reasonably believe in God—immediately and basically—without the support of an argument. One’s properly functioning cognitive faculties can produce belief in God in the appropriate circumstances with or without argument or evidence.

e. Religious Experience

Although Plantinga contends that belief in God does not require the support of propositional evidence or argument (like a theistic proof) in order to be rational, he does contend that belief in God is not groundless. According to Plantinga, belief in God is grounded in characteristic religious experiences such as beholding the divine majesty from the top of a mountain or the divine creativity when noticing the intricate beauty of a flower. Other sorts of alleged religious experiences involve a sense of guilt (and forgiveness), despair, the inner testimony of the Holy Spirit, or direct contact with the divine (mysticism). The experience of many believers is so vivid that they describe it with sensory metaphors: they claim to see, hear or be touched by God.

It is important to note that people who believe on the basis of religious experience do not typically construe their belief in God as based on an argument (any more than belief in other persons is based on an argument). They believe they have seen or heard God directly and find themselves overwhelmed by belief in God. Religious experience is typically taken as self-authenticating. In good Reidian fashion, one might simply take it that one has a cognitive faculty that can be trusted when it produces belief in God when induced by the appropriate experiences; that is, one is permitted to trust one’s initial alleged religious experience as veridical, just as one must trust that others of one’s cognitive faculties are veridical. (It should be noted that Reid himself does not make this claim. He believes that God’s existence can and should be supported by argument.) Richard Swinburne alleges that it is also reasonable to trust what others tell us unless and until we have good reason to believe otherwise. So, it would be reasonable for someone who did not have a religious experience to trust the veridicality of someone who did claim to have a religious experience. That is, it would be reasonable for everyone, not just the subject of the alleged religious experience, to believe in God on the basis of that alleged religious experience.

Some philosophers reject religious experience as a proper ground for religious belief. While not denying that some people have had powerful, so-called mystical experiences, they deny that one can reliably infer from that experience that the source or cause of that experience was God. Even the most enthusiastic mystics contend that some mystical experiences are illusory. So, how does one sort out the veridical from the illusory without begging the question? And if other evidence must be brought in to assess the validity of religious experience, is not then religious belief based more on that evidence than on the immediate experience? William Alston (1991) responds to these sorts of challenges by noting that perceptual experience, which is seldom questioned, is afflicted with precisely the same problems. Yet we do not take perceptual beliefs to be suspect. Alston argues that if religious experiences and the beliefs they produce relevantly resemble perceptual experiences and the beliefs they produce, then we should not hold beliefs based upon religious experience to be suspect either.

f. Internalism/Externalism

Some of the most important issues concerning the rationality of religious belief are framed in terms of the distinction between internalism and externalism in epistemology. Philosophers who are internalists with respect to rationality argue that we can tell, from the inside so to speak, if our beliefs are rationally justified. The language used by the classical foundationalist to describe basic beliefs is thoroughly internalist. ‘Self-evident’ and ‘evident to the senses’ are suggestive of beliefs that have a certain inner, compelling and unquestionable luminosity; one can simply inspect one’s beliefs and “see” if they are evident in the appropriate respects. And since deductive inference transfers rational justification from lower levels to higher levels, by carefully checking the inferential relations among one’s beliefs, one can see this luminosity passing from basic to non-basic beliefs. So internalists believe that rationality is something that can be discerned by the mental inspection of one’s own beliefs, items to which one has direct cognitive access.

Plantinga, on the other hand, argues that modern foundationalism has misunderstood the nature of rational justification. Plantinga calls the special property that turns true belief into knowledge “warrant.” According to Plantinga, a belief has warrant for one if and only if that belief is produced by one’s properly functioning cognitive faculties in circumstances to which those faculties are designed to apply; in addition, those faculties must be designed for the purpose of producing true beliefs. So, for instance, my belief that ‘there is a computer screen in front of me’ is warranted only if it is produced by my properly functioning perceptual faculties (and not by weariness or dreaming), if no one is tricking me, say, by having removed my computer and replaced it with an exact painting of my computer (thereby messing up my cognitive environment), and if my perceptual faculties have been designed (by God) for the purpose of producing true beliefs. Only if all of these conditions are satisfied is my belief that there is a computer screen in front of me warranted.

Note the portions of Plantinga’s definition which are not within one’s internal or direct purview: whether or not one’s faculties are functioning properly, whether or not one’s faculties are designed by God, whether or not one’s faculties are designed for the production of true beliefs, whether or not one is using one’s faculties in the environment intended for their use (one might be seeing a mirage and taking it for real). According to Plantinga’s externalism we cannot acquire warrant simply by attending to our beliefs. Warranted belief (knowledge) depends on circumstances external to the believing agent and so is not entirely up to us. Warrant depends crucially upon whether or not conditions that are not under our direct rational purview or conscious control are satisfied. If externalism is correct, then classical foundationalism has completely misunderstood the nature of epistemic warrant.

g. The Rational Stance

Because of the possibility of error, those who accept belief in God as a basic belief should nonetheless be concerned with evidence for and against belief in God. Following Reid, Reformed epistemologists contend that belief begins with trust (not suspicion, as the evidentialist apparently claims). Beliefs are, in their terms, innocent until proven guilty rather than guilty until proven innocent. In order to grasp reality, we must use and trust our cognitive faculties or capacities. But we also know that we get things wrong. The deliverances of our cognitive faculties are not infallible. Reid, Plantinga and Wolterstorff are keenly aware of human fallibility and recognize the need for a deliberative (reasoning) faculty that helps us adjudicate apparent conflicts among beliefs delivered innocently by our cognitive faculties. Reid’s general approach to rational belief is this: trust the beliefs produced by your cognitive faculties in the appropriate circumstances, unless you have good reason to reject them.

Let’s press the problem of error. As shown by widespread disagreement, our cognitive faculties seem less reliable in matters of fundamental human concern such as the nature of morality, the nature of persons, social and political thought, and belief in God. Given that rationality is truth-aimed, Reformed epistemologists should be willing to do two things to make the attainment of that goal more likely. First, they ought to seek, as best they can, supporting evidence for immediately produced beliefs of fundamental human concern. Because evidence is truth-conducive, it can lend credence to a basic belief. It doesn’t follow that basic beliefs about morality, God, etc. are irrational until such evidence is adduced; but perhaps one’s epistemic status on these matters can be improved by obtaining confirming evidence. This would make Reformed epistemology a paradigmatic example of the Augustinian view of faith and reason: fides quaerens intellectum (faith seeking understanding). Second, they ought to be open to contrary evidence to root out false beliefs. Given the likelihood that they could be wrong about these matters, they ought not close themselves off to the possibility of epistemic correction. If Reformed epistemologists are sincere truth-seekers, they should take the following stance:

The Rational Stance: Trust the deliverances of reason, seek supporting evidence, and be open to contrary evidence.

According to Reformed epistemology, evidence may not be required for belief in God to be rational. But, given the problem of error, it should nonetheless continue to play an important role in the life of the believer. Fides quaerens intellectum.

h. Objections to Reformed Epistemology

Reformed epistemology has been rejected for three primary reasons. First, some philosophers deny that we have a sensus divinitatus and so reject the parity argument. Second, some philosophers argue that Reformed epistemology is too latitudinarian, permitting the rational acceptability of virtually any belief. Gary Gutting calls this ‘the Great Pumpkin Objection’ because Linus (of Peanuts fame) could have written a defense of the sensus pumpkinus that is parallel to Plantinga’s defense of the sensus divinitatus. Finally, Reformed epistemology has been rejected because it has been perceived to be a form of fideism. Fideism is the view that belief in God should be held in the absence of, or even in opposition to, reason. According to this traditional definition of fideism, Reformed epistemology does not count as a form of fideism because it goes to great lengths to show that belief in God is rational. However, if one defines fideism as the view that belief in God may be rightly held in the absence of evidence or argument, then Reformed epistemology will be a kind of fideism.

4. Groundless Believing

Given philosophy’s emphasis on reason, very few philosophers aspire to fideism. Nonetheless, some major thinkers have denied that reason plays any significant role in the life of the religious believer. Tertullian’s rhetorical question, “What has Athens to do with Jerusalem?”, is meant to elicit the view that reason (the Athens of Socrates, Plato and Aristotle) has little or nothing to do with faith (the Jerusalem of Jesus). Tertullian would go on to say, “I believe because it is absurd.” Pascal (1623-1662), Kierkegaard (1813-1855) and followers of Wittgenstein (late 20th century) have all been accused of fideism (which is the philosophical equivalent of calling a US citizen a “commie” in the 1950s). Let us consider their positions.

Pascal’s wager brings costs and benefits into the analysis of the rationality of religious belief. Given the possibility that God exists, that the unbeliever will be punished with eternal damnation, and that the believer will be rewarded with eternal bliss, Pascal argues that it is rational to wager that God exists. Using a rational, prudential decision procedure, he asks us to consider placing a bet on God’s existence. If one bets on God, then either God exists and one enjoys an eternity of bliss, or God does not exist and one loses very little. On the other hand, if one bets against God and wins, one gains very little; but if one loses that bet, then one will suffer in hell forever. Prudence demands that one believe in God’s existence. Pascal concludes: “Wager, then, that God exists.”
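Pascal’s reasoning can be laid out as a decision matrix with expected utilities. This is a standard modern reconstruction, not Pascal’s own notation; here c and g stand for the small finite costs and gains at stake, and p is any nonzero probability that God exists:

```latex
\begin{array}{l|cc}
                              & \text{God exists } (p)          & \text{God does not exist } (1-p) \\ \hline
\text{Wager for God}          & +\infty \text{ (eternal bliss)} & -c \text{ (small finite cost)}   \\
\text{Wager against God}      & -\infty \text{ (eternal loss)}  & +g \text{ (small finite gain)}
\end{array}

\begin{aligned}
EU(\text{for})     &= p \cdot (+\infty) + (1-p)(-c) = +\infty \\
EU(\text{against}) &= p \cdot (-\infty) + (1-p)(+g) = -\infty
\end{aligned}
```

However small p is, so long as it is nonzero the wager for God has infinitely greater expected utility; this is the sense in which prudence “demands” belief.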

Pascal’s wager has been widely criticized, but we shall only consider here the relevance of the wager to Pascal’s view of faith and reason. The wager is just one of his many tools for shocking people into caring about their eternal destinies. After arguing that our desires affect our abilities to discern the truth, he tries to get our desires appropriately oriented toward the truth. The wager can stimulate the desire to seek the truth about God and, after one’s desires are changed, the ability to judge the evidences for Christianity properly. So, in spite of the prominence of the wager and its apparent disregard for evidence, Pascal appears to be a kind of evidentialist after all (but not a classical foundationalist).

Søren Kierkegaard’s emphasis on the role of inwardness or subjective appropriation has played a role in his being understood as a fideist. His reaction against both rationalism and dogmatism led him to view faith as a certain madness, a “leap” one makes beyond what is reasonable (a leap into the absurd). Some philosophers argue that Kierkegaard is simply emphasizing that faith is more than rational assent to the truth of a proposition, involving more fundamentally the passionate commitment of the heart.

Finally, followers of the enigmatic Ludwig Wittgenstein have defended the groundlessness of belief in God, a view that has been called “Wittgensteinian fideism.” Wittgenstein’s later works both noticed and affirmed the tremendous variety of our beliefs that are not held because of reasons—such beliefs are, according to Wittgenstein, groundless. Many of Wittgenstein’s most prominent students are religious believers, some of whom took his general insights into the structure of human belief and applied them to religious belief. Norman Malcolm, for example, favorably compares belief in God to the belief that things don’t vanish into thin air. Both are part of the untested and untestable framework of human belief. These frameworks form the system of beliefs within which testing of other beliefs can take place. While we can justify beliefs within the framework, we cannot justify the framework itself. The giving of reasons must come to an end. And then we believe, groundlessly.

5. Conclusion

Is belief in God rational? The evidentialist objector says “No” due to the lack of evidence. Theists who say “Yes” fall into two main categories: those who claim that there is sufficient evidence and those who claim that evidence is not necessary. Theistic evidentialists contend that there is enough evidence to ground rational belief in God, while Reformed epistemologists contend that evidence is not necessary to ground rational belief in God (but that belief in God is grounded in various characteristic religious experiences). Philosophical fideists deny that belief in God belongs in the realm of the rational. And, of course, all of these theistic claims are widely and enthusiastically disputed by philosophical non-theists.

In Western European countries, religious belief has waned since the time of the Enlightenment. Yet there are countertrends. Today over 90% of Americans profess belief in a higher power. In China, after decades of institutionally enforced atheism, religious belief is dramatically on the rise. And even though religious belief has waned among professional Anglo-American philosophers since the Enlightenment, many prominent Anglo-American philosophers are theists. What conclusions can be drawn from these sociological observations? That Reason will eventually triumph over superstition as all countries eventually follow Western Europe’s lead? That irrational religious belief is so stubbornly tenacious that Reason is incapable of wiping it out? That the natural tendency to believe in God is overlaid by various forms of sin (such as greed in the West or wicked Communism in the East)? That once the evidence is made clear to deprived peoples, rational belief in God will flourish? Of course, these sociological facts are irrelevant to discussions of rational belief in God. Yet they are relevant to this: the persistence of religious belief in various contexts will continue to spur discussions of and developments in the epistemology of the religious for succeeding generations.

See also the article "Religious Disagreement."

6. References and Further Reading

  • Adams, Robert Merrihew. The Virtue of Faith and Other Essays. Oxford: Oxford University Press, 1987.
  • Adams, Marilyn McCord and Robert Merrihew Adams, eds. The Problem of Evil. Oxford: Oxford University Press, 1990.
  • Alston, William. Perceiving God. Ithaca: Cornell University Press, 1991.
  • Brockelman, Paul T. Cosmology and Creation: The Spiritual Significance of Contemporary Cosmology. New York: Oxford University Press, 1999.
  • Clark, Kelly James. Return to Reason: A Critique of Enlightenment Evidentialism and a Defense of Reason and Belief in God. Grand Rapids: Eerdmans, 1990.
  • Craig, William Lane, and Quentin Smith. Theism, Atheism, and Big Bang Cosmology. Oxford: Oxford University Press, 1993.
  • Davis, Stephen. God, Reason and Theistic Proofs. Edinburgh: Edinburgh University Press, 1997.
  • Gutting, Gary. Religious Belief and Religious Skepticism. Notre Dame: University of Notre Dame Press, 1982.
  • Helm, Paul. Faith and Understanding. Edinburgh: Edinburgh University Press, 1997.
  • Hume, David. Dialogues Concerning Natural Religion. New York: Routledge, 1779/1991.
  • Huxley, T. H. Agnosticism and Christianity, and Other Essays. Buffalo, NY: Prometheus Books, 1931/1992.
  • Jordan, Jeff, ed. Gambling on God. Lanham, MD: Rowman & Littlefield, 1994.
  • Le Poidevin, Robin. Arguing for Atheism: An Introduction to the Philosophy of Religion. New York: Routledge, 1996.
  • Murray, Michael, ed. Reason for the Hope Within. Grand Rapids: Eerdmans, 1999.
  • Plantinga, Alvin, and Nicholas Wolterstorff, eds. Faith and Rationality: Reason and Belief in God. Notre Dame: University of Notre Dame Press, 1983.
  • Plantinga, Alvin. Warrant: The Current Debate. New York: Oxford University Press, 1993.
  • Plantinga, Alvin. Warranted Christian Belief. New York: Oxford University Press, 2000.
  • Plantinga, Alvin. Warrant and Proper Function. New York: Oxford University Press, 1993.
  • Russell, Bertrand. Why I Am Not a Christian, and Other Essays on Religion and Related Subjects. New York: Simon and Schuster, 1957.
  • Swinburne, Richard. The Existence of God. New York: Clarendon Press, 1979.
  • Swinburne, Richard. Faith and Reason. New York: Oxford University Press, 1984.
  • Wainwright, William. Reason and the Heart: A Prolegomenon to a Critique of Passional Reason. Ithaca: Cornell University Press, 1995.
  • Wolterstorff, Nicholas. Reason within the Bounds of Religion. Grand Rapids: Eerdmans, 1976.
  • Wolterstorff, Nicholas. Thomas Reid and the Story of Epistemology. New York: Cambridge University Press, 2001.
  • Zagzebski, Linda, ed. Rational Faith: Catholic Responses to Reformed Epistemology. Notre Dame: University of Notre Dame Press, 1993.

Author Information

Kelly James Clark
Calvin College
U. S. A.

