
1. Background

In the long history of philosophy there have been comparatively few signs of social epistemology until recently. Treatments of topics that would nowadays be subsumed under this heading have occurred in various periods (think of discussions of testimony by Hume and Reid), but they were never assembled into a single unified package. In the second half of the 20th century, however, philosophers and theorists of assorted stripes launched a variety of debunking movements aimed at traditional epistemology. Although most of these writers did not (at first) use the phrase “social epistemology”, it was the socializing direction of their thought that made the phrase appropriate to their work. In the 1960s and 1970s there was a convergence of such thinkers who attacked the notions of truth and objectivity, a constellation that gained powerful influence in the academic community. The relevant authors included Thomas Kuhn, Michel Foucault, and members of the “Strong Programme” in the sociology of science. These writers sought to replace “rational” approaches to science and other intellectual activities with political, military, and/or other arational models of cognitive affairs. Many of them challenged the intelligibility of the truth concept, or challenged the feasibility of truth acquisition. In the social studies of science, practitioners such as Bruno Latour and Steve Woolgar (1986 [1979]) rejected the ideas of truth or fact as traditionally understood. So-called “facts”, they argued, are not discovered or revealed by science, but rather are “constructed”, “constituted”, or “fabricated” when scientific statements come to be accepted, or are no longer contested. “There is no object beyond discourse … the organization of discourse is the object” (1986: 73). Discourse being a social phenomenon, what they were saying, in effect, was that facts were to be eliminated in favor of social phenomena. (Whether social facts should also be eliminated is a question they didn’t address very clearly.)

Although few practicing philosophers of the period endorsed these ideas, at least one influential philosopher, Richard Rorty (1979), seemed to be a camp follower. He contrasted the conception of knowledge as “accuracy of representation” (which he rejected) with a conception of knowledge as the “social justification of belief” (1979: 170). His notion of “social justification”, it appears, simply amounted to the practice of “keeping the conversation going” (whatever this meant) rather than the classical project of pursuing “objective truth” or rationality (1979: 377).

Sharply departing from these debunking themes, contemporary social epistemology is substantially continuous with classical epistemology. It sees no need to reject or distance itself from the epistemological projects of the past. Even social practices, after all, can be—and often are—aimed at finding the truth. Such social practices have a hit-or-miss record; but the same could be said of individual practices. At any rate, epistemologists can engage in their traditional enterprise of appraising alternative methods in terms of their capacities or propensities to achieve this kind of goal. In social epistemology, however, the relevant “methods” might be social practices. Thus, the type of social epistemology one finds in today’s philosophical literature does not call for any large-scale debunking of classical epistemology. Such epistemology can survive, and even thrive, with an expanded conception of how the truth-goal (and the justification and rationality goals) can be served, namely, with the help of well-designed social and interpersonal practices and institutions.

Initial moves toward a positive form of social epistemology (as opposed to a debunking form) were made in the mid-1980s, largely in response to the debunkers. Alvin Goldman offered promissory notes (1978: 509–510; 1986: 1, 5–6, 136–138) toward such a conception and then developed a more detailed objectivist program in a contribution to a special issue of Synthese edited by Frederick Schmitt (Goldman 1987). In that contribution, Goldman advocated an avowedly truth-oriented (“veritistic”) approach to social epistemic evaluation. In the same issue Steve Fuller (1987) pursued a line more akin to the debunkers, and elaborated his position the following year with a monograph (Fuller 1988). Around the same time Fuller launched a journal entitled Social Epistemology, which became a prime venue for science studies work. This path, however, does not express what most philosophers now pursue under the heading of social epistemology. The science-and-technology-studies model, as it is now called, continues to have much in common with the history and sociology of science (à la Kuhn especially) and rather little in common with traditional epistemology.

The decade of the 1990s (and its run-up) saw the publication of several monographs and chapter-length treatments of various branches of social epistemology, followed by a wide-angle depiction of the field as a whole. C.A.J. Coady’s (1992) book-length treatment of testimony was a core contribution, especially given testimony’s centrality to social epistemology. The same may be said of Edward Craig’s monograph, Knowledge and the State of Nature (1990). The final chapter of Philip Kitcher’s (1993) book The Advancement of Science was devoted to the organization of cognitive labor in science, building on a journal article on the same theme (Kitcher 1990). Kitcher highlighted diversity within scientific communities as an important tool in the pursuit of truth. Margaret Gilbert’s On Social Facts (1989) made a forceful case for the existence of “plural subjects”, a crucial metaphysical thesis that provides one possible foundation for group-oriented, or collective, social epistemology. Alvin Goldman published a series of papers applying social epistemology to a number of topics, including argumentation (Goldman 1994), freedom of speech (Goldman and Cox 1996), legal procedure (Talbott and Goldman 1998), and scientific inquiry (Goldman and Shaked 1991). His book Knowledge in a Social World (1999) showed how classical epistemology, with its focus on the values of truth possession and error avoidance, could be applied to the social domain without abandoning its traditional rigor. Among the domains covered were testimony, argumentation, the Internet, science, law, and democracy. It sought to show how epistemology can have real-world applications even within a “veritistic” framework.

The years since 2000 have witnessed a surge of activity in social epistemology. This surge was encouraged by the 2004 launch of the journal Episteme, which is heavily dedicated to work in the field. The present entry will explore many of these developments in some detail; but it begins with a taxonomy of the field (based on Goldman 2010/2011) to help organize and distinguish the multifarious enterprises found under the umbrella of social epistemology.

2. Giving Shape to the Field: A Taxonomy of Social Epistemology

Traditional epistemology focuses on individual agents and their doxastic states or attitudes. Doxastic attitudes are a sub-species of propositional attitudes, ones that make categorical or graded judgments concerning the truth or falsity of their propositional contents. A doxastic attitude is right or wrong—accurate or inaccurate—as a function of the genuine truth-value of its propositional content. In addition to assessing beliefs as accurate or inaccurate, token attitudes (e.g., George’s believing Q at time t) can be evaluated along various epistemic dimensions such as justified or unjustified, rational or irrational, and knowledge-qualifying or not knowledge-qualifying.

Traditional epistemology has primarily concerned itself with formulating criteria for the epistemic evaluation of individuals’ doxastic states. Such evaluations may be based on whether the attitude token comports with the agent’s evidence or whether it is produced by a reliable belief-forming process. Given that justification evaluation is the paradigm of individual epistemology, what is (or are) the paradigm task(s) for social epistemology?

There are different ways in which an epistemic activity can count as “social”. One such way is for an individual agent to base a doxastic decision on what we may dub “social evidence”, where by social evidence we shall understand evidence concerning the utterances, messages, deeds, or thoughts of other people. A great deal of evidence that epistemic agents possess, of course, does not involve others at all. Consider the proposition, “A poodle pooped on Sylvia’s doorstep”. If Sylvia’s evidence for this proposition is purely perceptual, social epistemology may have no occasion to weigh in on the matter. But if Sylvia doesn’t witness any such canine action (because she isn’t home at the relevant time), she might still believe the proposition—and believe it justifiedly—based on different evidence. Her next-door neighbor might relate the incident to her when she comes home. The justifiedness of Sylvia’s belief will then hinge on criteria for justified trust in testimony, a staple problem of social epistemology.

Here we can introduce a first branch of social epistemology:

Social Epistemology I:
Assessing the epistemic quality of individuals’ doxastic attitudes where social evidence is used.

The first branch of social epistemology, so characterized, subsumes two of the most intensively debated topics in the field: (A) the problem of testimony-based justification, and (B) the problem of peer disagreement. These topics will be addressed in due course.

Obviously, what makes the first branch of social epistemology social is not the character of the doxastic agents who are studied. Rather, it is the social character of the evidence (relative to the agent). The second branch of social epistemology, by contrast, is social in an altogether different way. It is social because the doxastic agent is a social, or collective, entity. This branch of social epistemology starts by assuming that there are group entities that possess doxastic attitudes analogous to those possessed by individual humans. That there are such agents is a question of ontology (or perhaps philosophy of mind). It is undeniable, however, that we often acknowledge such group subjects in everyday thought and speech. If we are philosophically content with this practice, then the adoption of such doxastic attitudes by various groups gives rise to epistemological questions. Under what conditions are these entities justified in adopting a specified doxastic attitude, or making such a judgment? How does it depend—assuming it does so depend—on the various doxastic attitudes of the group’s members? Here, then, is a core problem for the second branch of social epistemology:

Social Epistemology II:
Assessing the epistemic quality of group doxastic attitudes (whatever their provenance may be).

A third branch of social epistemology has a wider assortment of manifestations. Within this branch the locus of activity ranges from social systems to social practices, institutions, or patterns of interaction. For example, a social system might formally select a pattern of rewards or punishments to motivate its actors or agents to engage in certain activities rather than others. Science as a social institution, for example, has adopted a reward system that confers honors, prizes, or credit on scientists and mathematicians who make important discoveries or prove major theorems. Legal systems adopt trial procedures designed to issue in judgments of defendants’ guilt or innocence. Choices among alternative procedures can be assessed in terms of how the chosen procedures “perform” in yielding judgments with high truth ratios. How often does a given trial system generate accurate judgments? How does it (or would it, if adopted) compare with alternative systems? Turning to science, how well does a given reward system function to motivate scientists to engage in fruitful inquiry that ultimately produces new knowledge?

Instead of deliberately adopted institutional arrangements, the same questions can be asked about alternative patterns of social interaction, which can also generate truth-linked consequences. Different patterns of communication, for example, and different choices of participants in a collective activity can vary in their degree of epistemic success. What are the best kinds of systems or practices? To what extent does the deployment of experts, for example, enhance a group’s accuracy, and how should relevant experts be identified? Some authors contend that diversity trumps expertise when it comes to group problem-solving. Is this correct? With these questions in mind, we can formulate the third branch of social epistemology as follows:

Social Epistemology III:
Assessing the epistemic consequences of adopting certain institutional arrangements or systemic relations as opposed to alternatives.

Under the aegis of this third branch of social epistemology, philosophers (and other professionals) can weigh the epistemic value of choosing one kind of institution or system rather than others. In the real world, of course, epistemic properties of an institution or system may not be the paramount properties to consider; they are certainly not the only ones of interest. But this doesn’t mean they should be neglected. Veritistic properties of a trial system, for example, are surely a major factor to consider when assessing a trial system’s level of success. It is generally conceded that we don’t want a system that commonly yields convictions of the innocent.

3. First Branch of Social Epistemology: Testimony and Peer Disagreement

3.1 Justification Conditions for Testimonial Belief

Epistemologists often speak of epistemic “sources”, a term that refers roughly to the ways we can get knowledge or justified belief. Standard examples of such sources in traditional (individual) epistemology are perception, introspection, memory, deductive and inductive reasoning, and so forth. When turning to social epistemology, we quickly encounter an ostensibly new kind of source, viz., testimony. Knowledge or justification can be acquired, it seems, by hearing what others say or reading what they write (and believing it).

In the realm of epistemic sources, a distinction can be drawn between basic and non-basic (derived) sources. Vision is presumably a basic source of justification, but not all sources are basic. If testimony is also a source, is it a basic or non-basic source? David Hume argued for non-basicness. Although we are generally entitled to trust what others tell us, we are so entitled only in virtue of what we have learned from other (basic) sources. Here’s how the story goes, more fully. Each of us can remember many occasions on which people told us things that we independently verified (by perception) and found to be true. This reliable track record from the past—which we remember—warrants us in inferring (via induction) that testimony is generally reliable. From this we can conclude that any new instance of testimony we encounter is also likely to be true (assuming we have no defeaters). As James Van Cleve formulates the view,

testimony gives us justified belief … not because it shines by its own light, but because it has often enough been revealed true by our other lights. (Van Cleve 2006: 69)

This sort of view is called reductionism about testimony, because it “reduces” the justificational force of testimony to the combined justificational forces of perception, memory, and inductive inference.

More precisely, this view is usually called global reductionism, because it allows hearers of testimony to be justified in believing particular instances of testimony by inferential appeal to testimony’s general reliability. However, global reductionism has come under fire. C.A.J. Coady argues that the observational base of ordinary epistemic agents is much too small and limited to allow an induction to the general reliability of testimony. Coady writes:

[I]t seems absurd to suggest that, individually, we have done anything like the amount of field-work that [reductionism] requires … many of us have never seen a baby born, nor have most of us examined the circulation of the blood nor the actual geography of the world … nor a vast number of other observations that [reductionism] would seem to require. (Coady 1992: 82)

An alternative to global reductionism is local reductionism (Fricker 1994). Local reductionism does not require a hearer to be justified in believing that testimony is generally reliable. It only requires the hearer to be justified in believing that the particular speaker whose current testimony is at issue is reliable (or at least reliable and sincere about the specific topic she is addressing). This is a much weaker and more easily satisfied requirement than that of global reductionism.

Local reductionism may still be too strong, though for a different reason. Is a speaker S trustworthy for hearer H only if H has positive evidence or justification for the reliability of this particular speaker S? This is far from clear. If I am at an airport or a train station and hear a public announcement of the gate or track for my departure, am I justified in believing that testimony only if I have evidence for the announcer’s general reliability (or even her reliability about departures)? I do not normally gather such evidence for a given public address announcer, but surely I am justified in trusting such announcements.

Given these problems for both kinds of reductionism, some epistemologists embrace testimonial anti-reductionism (Coady 1992; Burge 1993; Foley 1994). Anti-reductionism holds that testimony is itself a basic source of evidence or justifiedness. No matter how little positive evidence a hearer has about the reliability and sincerity of a given speaker, or of speakers in general, she has default or prima facie warrant in believing what the speaker says. This thesis is endorsed, for example, by Tyler Burge, who writes:

[A] person is entitled to accept as true something that is presented as true and that is intelligible to him, unless there are stronger reasons not to do so. (Burge 1993: 457)

Experience, of course, can provide defeaters for such prima facie justification, so that, on balance, the receiver may not be justified. Absent such defeaters, however, justification arrives “for free”. The hearer needs no positive reason for believing the speaker’s report.

According to anti-reductionism, then, a hearer doesn’t need positive support for testimonial reliability, or the speaker’s sincerity, to justifiedly believe what the speaker says. Only a weaker condition is imposed: that the hearer not have evidence that defeats the speaker’s being reliable and sincere. Since this negative requirement is extremely weak, many anti-reductionists add a further requirement, namely, that the speaker actually be competent and sincere. However, Jennifer Lackey (2008: 168 ff) argues that these conditions do not suffice for hearer justifiedness. Suppose Sam sees an alien creature in the woods drop something that seems to be a diary, written in a language that appears to be English. Sam has no evidence for or against the sincerity and reliability of aliens as testifiers, so he lacks both positive reasons for trusting the diary’s contents and negative reasons against trusting them. Anti-reductionism implies that if the alien is both reliable and sincere, Sam is justified in believing the diary’s contents. Intuitively, however, this is dubious.

Reductionism and anti-reductionism both assume that testimonial beliefs can be justified because testimony provides evidence for the truth of what is asserted. Proponents of the assurance or interpersonal view of testimony (Ross 1986; Hinchman 2005; Moran 2006; Faulkner 2007; Fricker 2012; Zagzebski 2012) reject this assumption. On their view, testimonial beliefs are justified not (or not only) because testimony is evidence, but because testimony is assurance. More precisely, testimonial justification has its roots in the fact that the speaker takes responsibility for the truth of her assertion (Moran 2006) or invites the hearer to trust her (Hinchman 2005). The assurance view is motivated by the following line of argument. As Moran points out,

when the hearer believes the speaker, he not only believes what is said but does so on the basis of taking the speaker’s word for it. (2006: 274)

We believe what the speaker says on the ground of her assurance that what she says is true. “Evidential” accounts of testimonial justification have difficulties explaining this phenomenon. If all that matters for testimonial justification is that the speaker be a reliable indicator of the truth, the fact that she is inviting us to trust her should be epistemically superfluous. Proponents of the assurance view conclude that the speaker’s assurance provides a distinctive epistemic but non-evidential kind of reason for believing her assertion.

Lackey (2008) and Schmitt (2010) raise an important problem for the assurance view. Perhaps the fact that a speaker invites us to trust her provides a distinctive kind of reason to accept her testimony. But it is not clear at all that the kind of reason in question is epistemic (rather than ethical or prudential). Lackey (2008) makes the point through the following example. Ben tells Kate that their boss is having an affair with an intern. Earl is eavesdropping on their conversation. On the basis of Ben’s testimony, both Kate and Earl form the belief that their boss is having an affair. On the assurance view, their respective beliefs should have different epistemic statuses. Since Ben was addressing Kate but not Earl, Kate has a distinctive epistemic reason to believe Ben that Earl lacks. However, if both Kate and Earl are functioning properly, have the same background information, etc., the claim that their respective beliefs have different epistemic values or statuses is implausible. To be sure, the fact that Ben was inviting Kate but not Earl to trust him does give rise to certain asymmetries between Kate and Earl. For instance, if it is revealed that Ben was lying, Kate is entitled to feel betrayed, while Earl isn’t. But it is dubious that these asymmetries have any epistemic significance.

Another important question in the epistemology of testimony is whether testimony can generate rather than merely transmit knowledge. It is tempting to regard testimony as transmission of information from speaker to hearer. This may be analogous to what transpires in (an individual’s) memory when justifiedness or knowledge possession is passed from an earlier to a later time within a particular subject. If a person retains a belief in memory from an earlier to a later time, the belief’s justificational status at the later time will normally be the same as its justificational status at the earlier time, so long as no new evidence is encountered during the interval. In other words, memory serves as a device for preserving justifiedness from one time to another. Testimony might have an analogous property. It might be the transmission of justifiedness and/or knowledge across individuals. If this were right, however, it would imply that a hearer cannot know p or be justified in believing p as a result of testimony unless that same proposition is known by (or justified for) the speaker. Is this correct?

Lackey (2008: 48–53, 2011: 83–86) argues to the contrary with several examples, one of which features a creationist teacher. A certain teacher is a devout creationist who does not believe in evolutionary theory. Nonetheless, she has the responsibility to teach her students that Homo sapiens evolved from Homo erectus, and she teaches them this although she doesn’t believe it herself. Because she doesn’t believe it, she doesn’t know it, since knowledge requires belief (in the knower). Nonetheless, the students come to believe the proposition that Homo sapiens evolved from Homo erectus from the evidence the teacher presents, and they are justified by the evidence, so they thereby come to know this proposition. But the scenario cuts against the transmission thesis, because the hearers acquire knowledge despite the fact that the teacher doesn’t know.

3.2 Learning from the Testimony of Experts

The previous sub-section addressed the basic or generic problem of testimony. In this sub-section and the next one we examine two “spin-offs” of the generic problem of testimony.

In every society there are topics on which some people have substantially greater expertise than do others. When it comes to medical matters or financial investments, some people have special training and/or experience that most people lack. An expert in any domain will know more truths and have more evidence than an average layperson, and this knowledge and evidence can be used to form true beliefs about new questions concerning the domain. In addition, laypersons will commonly recognize that they know less than experts. Indeed, they may start out having no opinion about the correct answer to many important questions, and may feel hesitant about trying to form such opinions. They are therefore motivated to consult with a suitable expert to whom they can pose the relevant question and thereby learn the correct answer. In all such cases, one seeks an expert whose statements or opinions are likely to be true.

But are laypersons in a position to recognize who is a (relevant) expert? Even genuine experts often disagree with one another. That is why a wise layperson won’t necessarily accept the first piece of testimony he receives from a putative expert, but will often seek a “second opinion”. But what should he do if the second opinion conflicts with the first? Can a layperson justifiedly identify which (professed) expert to trust? This is called the novice/two experts problem (Goldman 2001/2011).

A fundamental problem facing the layperson is that genuine expertise often arises from knowledge of esoteric matters, matters of which most people are ignorant. Thus, even when a layperson listens carefully to someone professing great expertise, the layperson may be at a loss to decide whether the self-professed expert merits much trust. Goldman considers several ways by which a layperson might try to choose (justifiedly) between two or more disagreeing experts. Let us review three of these ways. One is to arrange a “debate” between the self-professed experts. The layperson would hear the criticisms and responses by each participant and try to decide who has the better argument. It is not obvious, however, how the layperson can do this. Many premises asserted by the experts are likely to be esoteric and therefore difficult—if not impossible—for the layperson to assess. Both the truth-values of the asserted premises and the strength of evidential support they confer on the conclusions will be difficult for the layperson to assess.

Another way for a layperson to choose among experts is to inquire which position endorsed by one of them is most common among all (professed) experts. But how significant is it that one expert belongs to a more numerous camp? This is a function of the dependence or independence relations between the consulted experts and other experts in the field. If one expert, for example, belongs to a large following behind a “guru”, who charismatically persuades many people to agree with him uncritically, it may not matter how numerous they are. The sameness of their opinions does not add much evidential weight unless the followers formed their opinions with enough independence to be at least partially (conditionally) independent of one another (Goldman 2001/2011: 121–124).
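The point about independence can be illustrated with a toy simulation (ours, not Goldman’s; the accuracy figures are arbitrary assumptions). Eleven followers who merely echo a guru are collectively right exactly as often as the guru alone, whereas eleven equally accurate but independent judges form a far more reliable majority:

```python
# A toy Monte Carlo comparison (illustrative numbers only) of majority
# opinion among independent experts versus followers of a single "guru".
import random

def majority_correct(n_experts, accuracy, follow_guru, trials=10_000):
    """Estimate how often a majority of the experts answers correctly."""
    hits = 0
    for _ in range(trials):
        if follow_guru:
            # All experts echo one guru, so there is really just one opinion.
            votes = [random.random() < accuracy] * n_experts
        else:
            # Each expert judges independently, with the same accuracy.
            votes = [random.random() < accuracy for _ in range(n_experts)]
        hits += sum(votes) > n_experts / 2
    return hits / trials

print(majority_correct(11, 0.7, follow_guru=True))   # ~0.70: numbers add nothing
print(majority_correct(11, 0.7, follow_guru=False))  # ~0.92: independence helps
```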

A third way to assess the comparative trustworthiness of rival experts is by comparing their respective track-records: how often has each expert correctly answered past questions in the domain? The problem here is how a layperson could assess an expert’s past track-record. Laypersons will typically lack knowledge and justification about whether the past answers were correct. Arguably, the situation is not quite as bleak as it seems initially. Suppose that a putative expert in astronomy predicts a solar eclipse at a certain time and place fifteen years hence. At the time of prediction, a layperson cannot tell whether the expert’s prediction is correct, so it is no immediate help in estimating her track record. But if the layperson, fifteen years later, is in the right place at the right time, he can observe whether or not a solar eclipse occurs then. So expert statements are not inevitably beyond the verificational capacities of laypersons.

3.3 Can “Silence” Support Justified Belief?

The previous two sub-sections provide examples in which a doxastic agent decides whether to believe a proposition based on another person’s assertion. This sub-section introduces an epistemic situation in which a doxastic agent receives no testimony from any source but regards the very absence of testimony as evidence in its own right. This situation is described by Sanford Goldberg (2010, 2011).

As a point of comparison, some theorists of testimony (e.g., Lipton 1998) hold that a hearer is justified in believing P in virtue of somebody’s testimony-that-P just in case P’s being true is part of the best explanation of the person’s testifying-that-P. Analogously, the absence of testimony that P might serve as a “negative” piece of evidence for the truth of not-P in special circumstances in which not-P is part of the best explanation of the silence that is “heard”.

Goldberg characterizes the type of inference in question as having the following form: “P must not be true, because if it were true, I would have heard it by now”. This type of inference is reasonable in a special type of social situation. If you are a regular consumer of online news, occurrences like catastrophes and media sensations will be broadcast widely and you will rapidly become apprised of them. If you haven’t gotten wind of any such event in the last twelve hours, it is reasonable to infer that no such events have occurred. In such circumstances, silence can be as informative as a verbal message.
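The underlying logic can be given a simple Bayesian gloss (our rendering; Goldberg does not formalize it this way): silence is evidence against P because hearing nothing is far more probable when nothing newsworthy has happened than when something has. A minimal sketch, with made-up numbers:

```python
# Silence as Bayesian evidence: hearing no report is much likelier if no
# event occurred than if one did, so silence lowers P(event). All of the
# probabilities below are made-up assumptions for illustration.
prior_event = 0.10            # P(a newsworthy event occurred)
p_silence_if_event = 0.05     # P(I heard nothing | event): coverage is reliable
p_silence_if_no_event = 0.99  # P(I heard nothing | no event)

numer = p_silence_if_event * prior_event
denom = numer + p_silence_if_no_event * (1 - prior_event)
posterior_event = numer / denom  # Bayes' rule: P(event | silence)

print(round(posterior_event, 4))  # 0.0056: silence makes "no event" very credible
```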

A precise description of the kind of standing communication system that warrants this kind of inference by a suitably positioned agent is a delicate matter, which Goldberg explores (2011: 96–105). First, he explains, there must exist a “source” that regularly reports to a community about a certain class of events. Second, the doxastic subject of interest must be attuned to this source, so that s/he will receive such a report in a timely fashion. This and other similar conditions are proposed as sufficient conditions to confer warrant on the subject in believing that no such event has occurred if there has been silence from the source. Details aside, this does seem to be a legitimate type of “testimony”-based belief where the so-called testimony is really an absence of testimony. Nonetheless, this fits our general characterization of the first branch of social epistemology as a branch that studies warrant based on social evidence. Given the features of the social communication system sketched above, silence qualifies as a kind of social evidence.

3.4 Peer Disagreement

The cases surveyed thus far in this section involve substantial epistemic asymmetry between the agent and her source of information. Interesting questions also arise when we turn to situations involving epistemic symmetry between agents. Suppose that two people form conflicting beliefs about a given question: one believes p while the other believes not-p. Suppose moreover that they share all their evidence relevant to the question. Finally, suppose that each believes that they are epistemic peers: that they have equally good perceptual abilities, reasoning skills, and so on. Obviously they cannot both be correct in their beliefs; the two propositions believed are contradictory. But can they be rational to hold fast to their initial beliefs, now that they know they have the same evidence and respect one another as equally good reasoners? How (if at all) should they proceed to revise their initial assessments in light of their disagreement? This is the problem of peer disagreement.

Responses to this problem have tended to fall on either side of the following spectrum. At one end are “conciliationist” or “equal weight” views, according to which two peers who disagree about p should subsequently become substantially less confident in their opinions regarding p. At the other end of the spectrum are “non-conciliationist” or “steadfast” views, on which one is not rationally required to change one’s view in the face of peer disagreement. In the middle of the spectrum, one finds views that agree with conciliationism in certain cases, and with steadfastness in others.

Conciliatory views are motivated by cases like the following one, adapted from Christensen (2007). (Similar examples are used by Feldman (2007) and Elga (2007) in defense of conciliationism.) You and your friend have been going out to dinner together for several years. Each time you add a 20% tip and split the check; upon receiving the check you each do the calculation in your head. Over the years, you and your friend have been right equally often, so that you regard each other as epistemic peers when it comes to determining your share. Tonight, after doing the math in your head, you conclude that your share is $43, and become confident of this. But your friend announces that she is quite confident that your share is $45. Here it seems quite obvious that upon learning of your disagreement, you should become substantially less confident in your belief that the share is $43; in fact, you should become equally confident that the share is $45. After all, your disagreement is evidence that one of you has made a mistake; and you have no particular reason to suppose that your friend is the one who made a mistake. Under these circumstances, lowering your confidence that the share is $43 seems the only reasonable attitude. And of course the same holds for your friend, mutatis mutandis.
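In terms of credences, the equal weight verdict on such a case can be put schematically as follows (a bare-bones rendering of the conciliationist idea, not Christensen’s own formalism):

```python
# The equal weight view in credence form: my peer's opinion counts exactly
# as much as my own. The credences are illustrative assumptions.
my_credence = 0.9    # my confidence, after mental arithmetic, that the share is $43
peer_credence = 0.1  # my peer is confident it's $45, so she gives $43 low credence

revised = (my_credence + peer_credence) / 2  # split the difference
print(revised)  # 0.5: I should now be undecided between $43 and $45
```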

Feldman (2007) offers an influential, more abstract argument for conciliationism on the basis of the “uniqueness thesis”. This is the view that for any proposition p and any body of evidence E, exactly one doxastic attitude is the rational attitude to have toward p on the basis of E, where the possible attitudes include believing p, disbelieving p and suspending judgment. Feldman’s argument seems to be the following. If the uniqueness thesis is true, it follows that in cases where I believe p and my peer believes not-p at least one of us must have formed an irrational opinion. Since I have no good reason to believe that I am not the one who responded improperly to the evidence, the only rational option seems to be to suspend judgment about p. Christensen (2007) offers a similar argument formulated in terms of credences rather than all-or-nothing doxastic attitudes. (Note that the uniqueness thesis by itself doesn’t entail conciliationism. One might endorse uniqueness but hold that if my initial belief that p was the proper response to the evidence, the evidential impact of peer disagreement on my belief is nullified, so that I am rational in holding fast to my initial belief. See Kelly 2010.)

One problem with this argument is that the uniqueness thesis is a very strong and controversial thesis. In particular, when the evidence bearing on p is meager, it seems implausible to hold that only one doxastic attitude toward p is permitted. Kelly (2010) turns this into an argument against conciliationism, by arguing that conciliationism entails uniqueness. Whether it does so, however, is a matter of debate (see Ballantyne and Coffman 2012).

Proponents of steadfast views often motivate their position by pointing to alleged problems for conciliationism. One issue is the degree of skepticism to which conciliationism seems to lead. For every political, philosophical or religious view one may endorse, there are competent and well-informed people who disagree. The implication that one should become an agnostic in all these areas of controversy is worrisome.

A second issue for conciliationism is that it seems to demand “throwing away evidence” (Kelly 2005). Conciliationism, the objection goes, is too insensitive to the possibility that one of the parties initially reasoned well while the other didn’t. If one peer was epistemically more virtuous than the other in arriving at her initial opinion, this should be relevant to what attitudes are rationally required of each peer upon learning of the disagreement. Conciliationism, however, seems to imply that the epistemic quality of the process by which each peer arrived at her initial opinion has no bearing on the correct attitudes for them to adopt after learning of their disagreement. Each one is rationally required to move her views in the other’s direction, regardless of whether or not they initially reasoned correctly.

A third problem for conciliationism is that it seems self-undermining. Since the epistemic significance of disagreement is itself a matter of controversy, it seems that a proponent of conciliationism should become much less convinced of its truth upon learning about this disagreement. One may worry that there is something wrong with a principle that tells you not to believe in its own truth.

Conciliationists have offered responses to each of these three objections. Elga (2007) responds to the charge that conciliationism leads to widespread skepticism in the following way. For many controversial topics, he argues, disagreement involves a large number of interconnected issues. If two people disagree about the morality of abortion, they will likely disagree on many connected normative and factual matters as well. Under these circumstances, he argues, neither is in a position to regard the other as an epistemic peer. Christensen (2007) discusses the charge that conciliationism requires throwing away evidence. He contends that when conciliationists advocate “splitting the difference”, they do not mean that such revision of credences guarantees full rationality. Instead, conciliationism is a view about the bearing of one kind of evidence: evidence regarding what epistemic peers believe on a certain subject matter. This sort of evidence should indeed be taken into account, but it isn’t the whole story, and conciliationism doesn’t advocate ignoring the epistemic quality of the steps taken by agents in forming their initial beliefs. Elga (2010) discusses the problem of self-undermining and argues for a view on which conciliationism is the right response to cases of peer disagreement, except when the controversy is about how to respond to disagreement. This restriction, he claims, is not ad hoc, because any fundamental epistemic policy or rule must be dogmatic about its correctness on pain of incoherence.

A second, more positive motivation for steadfastness is the thought that, contrary to what the uniqueness thesis says, a single body of evidence can rationalize more than one doxastic attitude. Thus Gideon Rosen (2001: 71) writes:

It should be obvious that reasonable people can disagree, even when confronted with a single body of evidence. When a jury or a court is divided in a difficult case, the mere fact of disagreement does not mean that someone is being unreasonable.

Rosen expands on this view by arguing that epistemic norms are permissive norms, not obligating or coercive norms. Thus, even when two people share the same evidence, it is permissible for one to adopt one doxastic attitude toward a proposition and for the other to adopt a different attitude (also see Pettit 2006).

A third motivation for steadfastness is the idea that from the first-person perspective, there is an important epistemic asymmetry between me and my peer. As Wedgwood (2007) points out, when forming opinions I am guided directly by my experiences and other beliefs, and only indirectly (if at all) by other people’s epistemic states. According to Wedgwood, this asymmetry makes it rational for me to be epistemically biased in my favor. In cases of peer disagreement, I am therefore justified in sticking to my guns, even if I have no independent reason for thinking that I (rather than my peer) got things right.

A fourth motivation for steadfast views is that in certain cases they give intuitively more plausible results than conciliationism. Consider Christensen’s restaurant case, but suppose this time that after having done the math in your head, you double-check the result using pen and paper and a reliable calculator. Each time the result is $43, so you become extremely confident that this is your share. Your peer, who has done the same thing, then announces he believes that the share is $45. Intuitively, under those circumstances you are rational in discounting your peer’s opinion and holding fast to your initial belief. (This variation on the restaurant case is due to Sosa 2010.)

Recently, treatments of peer disagreement have emerged that are neither strictly conciliatory nor steadfast. These views agree with conciliationism in certain cases, and with steadfastness in others. The two main approaches in this vein are Kelly’s (2010) total evidence view and Lackey’s (2010) justificationist view. According to the total evidence view, what reaction to peer disagreement is reasonable depends both on the quality of one’s original evidence and on the amount of evidence provided by the fact that one’s peer disagrees. When the original evidence is relatively weak, it is swamped by the evidence provided by the disagreement. In such cases, the total evidence view gives the same verdicts as conciliationism. Conversely, the more substantial the original evidence, the less substantial the epistemic impact of peer disagreement, and the more rational it is to stick to one’s guns. On Lackey’s justificationist view, how one should respond to peer disagreement depends on one’s degree of justified confidence before learning of the disagreement. In cases where this initial degree is relatively low, Lackey’s view agrees with conciliationism. In cases where one’s degree of justified confidence is high, such as Sosa’s restaurant case mentioned above, it is rational to remain very confident in the truth of one’s original belief.

4. Second Branch of Social Epistemology: The Nature and Epistemology of Collective Agents

4.1 The Existence and Variety of Group Doxastic Agents

It is extremely common to ascribe actions, intentions, and representational states to collections or groups of people. We might describe an army battalion as setting out on a mission that was chosen because of what the unit thought its enemy was planning to do. A government might be described as refusing to recognize a foreign dictator because, it believes, his recent “election” was fraudulent. In short, we ascribe representational states to collective entities, including motivational and informational states, despite the fact that they are not individual human beings. We make similar ascriptions to colonies, swarms, hives, flocks, and packs of animals. In all of these cases, it is debatable whether our ascriptions are predicated on genuine convictions that the collective entities literally have representational states, or whether we merely speak metaphorically. For present purposes it will be assumed that such talk should be taken seriously rather than metaphorically. For social epistemological purposes, it is the ascription of group doxastic states in particular that is essential to the second branch of the enterprise.

The rest of this section, then, presupposes that human groups exist and enjoy “intellectual” attitudes such as belief, disbelief, and suspension of judgment. Social epistemology is especially interested in how their epistemologies work. Under what conditions do collective beliefs attain such statuses as knowledge or justifiedness? We shall focus on the latter.

We begin, however, with questions about social metaphysics. A major sub-question here is how group entities relate to their members. One approach to this relationship is a so-called “summative” account (a term that is used rather variously by different authors). Here is one articulation of the summative approach.

  • (S) A group G believes that P if and only if all or most of its members believe P.

As Margaret Gilbert (1989) points out, however, this is too weak a condition. Two committees might have the very same membership, for example, the Library Committee and the Food Committee. Every member of the Library Committee might believe that the library has a million volumes, and so might the Library Committee itself. Every member of the Food Committee will also have the same belief. But the Food Committee does not have this belief because it doesn’t make judgments on that subject.

Gilbert (1989: 306) therefore formulates and embraces another theory, seconded by Frederick Schmitt (1994a: 262), called the Joint Acceptance account:

  • (JAA) A group G believes that P just in case the members of G jointly accept that P, where the latter happens just in case each member has openly expressed a willingness to let P stand as the view of G, or openly expressed a commitment jointly to accept that P, conditional on a like open expression of commitment of other members of G.

As Alexander Bird (2014) points out, on this model of group belief the members of a group will be mutually aware of one another as members of the group and aware of the group’s modus operandi. Hence, it might be called the mutual awareness model (following Philip Pettit 2003).

Bird contrasts the mutual awareness model with a distributed model. A distributed model deals with systems that feature information-intensive tasks which cannot be processed by a single individual. Several individuals must gather different pieces of information while others coordinate this information and use it to complete the task. A famous example is provided by Edwin Hutchins (1995), who described the distribution of tasks on a large ship, where different crew members take different bearings so that a plotter can determine the ship’s position and course. The key feature of such examples is that the task is broken into components that are assigned to different members of the group. Members in such distributed systems will not ordinarily satisfy the conditions of the commitment, or mutual awareness, model. In particular, Bird argues, science instantiates a distributed system. This is what makes it legitimate to represent science as a social agent, a subject that possesses (inter alia) knowledge.

Clearly, there are multiple conceptions of how sets of individuals might “compose” a group agent, and each of these conceptions can legitimately apply to real cases. A dichotomy between “summativism” and “non-summativism” may be inadequate to capture the multiplicity of the phenomena. This may complicate the social epistemologist’s task of providing an account of social epistemic statuses. But so be it; life is complicated. In discussing summativism (as we will in section 4.3) we must be careful to distinguish between summativism about belief versus summativism about justification. In this section and the next we discuss how beliefs of a group agent are determined or constituted by beliefs of all, most, or some of its members. This topic is usually pursued under the heading of belief aggregation.

4.2 Problems of Belief Aggregation

Christian List and Philip Pettit (2011) explore complications of belief aggregation under the heading of “judgment” aggregation. They preface their discussion with the following depiction of the metaphysical relation between group attitudes (and actions), on the one hand, and members’ attitudes (and actions), on the other:

The things a group agent does are clearly determined by the things its members do; they cannot emerge independently. In particular, no group agent can form propositional attitudes without these being determined, in one way or another, by certain contributions of its members, and no group agent can act without one or more of its members acting. (2011: 64)

As indicated earlier, the “attitudes” of special interest to social epistemology are doxastic attitudes, principally belief. List and Pettit reflect on what might be a suitable belief aggregation function, a mapping from profiles of members’ beliefs into group beliefs. Are there plausible functions that a social epistemologist should be happy to endorse?

A sticky problem arises in this territory, illustrated by the so-called “doctrinal paradox” (Kornhauser and Sager 1986). Suppose that a three-membered court has to render a judgment in a breach-of-contract case. The court needs to make a judgment on each of three (related) propositions, where the first two are premises and the third is a conclusion.

  1. The defendant was legally obliged not to do a certain action.
  2. The defendant did that action.
  3. The defendant is liable for breach of contract.

Legal doctrine entails that obligation and action are jointly necessary and sufficient for liability. So conclusion (3) is true if and only if the two premises are both true. Suppose, as shown in the table below, that the three judges form the indicated beliefs, vote accordingly, and that the judgment aggregation function is guided by majority rule, so that the group (or court) believes (and votes) as shown below:

              Obligation?   Action?   Liable?
Judge 1       True          True      True
Judge 2       True          False     False
Judge 3       False         True      False
Group         True          True      False

As is apparent, although each of the three judges has consistent beliefs, and the aggregation proceeds by an ostensibly unproblematic majority rule provision, the upshot is that the court’s beliefs are inconsistent. Given the legal doctrine, it is impossible for the defendant to have had the obligation and done the action yet not be liable.
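The paradox is easy to verify mechanically. The following minimal sketch (ours, purely illustrative) recomputes the table above by proposition-wise majority voting and checks the group view against the legal doctrine:

```python
# The doctrinal paradox from the table above: proposition-wise majority
# voting over logically connected propositions yields an inconsistent
# group view even though every individual judge is consistent.
judges = [
    {"obligation": True,  "action": True,  "liable": True},   # Judge 1
    {"obligation": True,  "action": False, "liable": False},  # Judge 2
    {"obligation": False, "action": True,  "liable": False},  # Judge 3
]

def majority(issue):
    """The group's verdict on one issue: True iff most judges say True."""
    return sum(j[issue] for j in judges) > len(judges) / 2

group = {issue: majority(issue) for issue in ("obligation", "action", "liable")}
print(group)  # {'obligation': True, 'action': True, 'liable': False}

# Legal doctrine requires: liable <-> (obligation and action).
print(group["liable"] == (group["obligation"] and group["action"]))  # False
```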

This kind of problem can easily occur whenever collective judgments are made on a set of connected issues. No plausible, simple principle like majority rule can generate a function in which the group attitude unproblematically reflects its members’ attitudes (List and Pettit 2011: 8). Indeed, List and Pettit (and others) have proved several impossibility theorems in which reasonable-seeming combinations of constraints are shown to be jointly unsatisfiable (List and Pettit 2011: 50; List and Pettit 2002).

No such impossibility results, however, have been produced for cases where the question is how a group attitude with respect to a single proposition depends on the members’ attitudes with respect to the same proposition (and nothing else). In the remainder of the discussion, therefore, we shall concentrate on that class of cases. It is also worth stressing that many groups operate in a highly “executive driven” style, where a chairperson, CEO, or “leader” of the group makes most of the decisions and takes most of the actions on behalf of the group, and where his/her (individual) belief essentially fixes or comprises that of the group. This kind of case can still be treated under the heading of “aggregation”, where the leader’s opinion simply outweighs that of (all of the) other members. The social epistemologist renders no normative judgment about how these belief relations between members and groups play out. At this stage, at any rate, she only wishes to describe and take account of the varieties of belief aggregation. That is, she just studies the alternative attitudinal “psychologies” of organizations, collectives, or groups. The next stage of social epistemology would then consist of epistemic evaluations of group beliefs. These evaluations belong to a separate phase of the enterprise.[1]

4.3 Approaches to Group Justifiedness

We now turn to this next stage of social epistemology, where the primary question is not what determines group belief but what determines group justification. In the newly emerging literature on collective epistemology there are relatively few developed accounts of group justification. We shall examine three such approaches, beginning with the dialectical approach.

Many formulations of a dialectical approach to individual justification are found in the literature, including those of Annis (1978), Brandom (1983), Williams (1999), Kusch (2002), and Lammenranta (2004). It should not be surprising, therefore, to find dialectical approaches to group justification patterned on this work. One such proposal is made by Raul Hakli (2011).

… [A] collectively accepted group view that p [is] justified if and only if the group can successfully defend p against reasonable challenges by providing reasons or evidence that are collectively acceptable to the group and that support p according to the epistemic principles collectively accepted in the epistemic community of the group. (2011: 150)

One salient feature of this approach is its relativization of group justifiedness to the epistemic principles of the group’s own community. If a group’s community harbors a highly dubious set of principles—for example, principles that embrace reliance on oracles or astrology—this would authorize (in justificational terms) any so-called “reasons” or “evidence” that employ these methods. This passage in Hakli clearly implies that there is no higher epistemic standard than that of the local community. This extreme kind of relativism will not appeal to epistemologists who hanker for greater objectivity. (In addition, what does the approach say if a given group has multiple sub-communities with conflicting epistemic principles?)

In other parts of his discussion, Hakli offers a further requirement for dialectically based justification:

[I]n order for a group to form an epistemically justified view it should first collect all the evidence available to the group members and openly discuss the arguments for and against the candidate views before voting or otherwise making the decision concerning the group view. (2011: 136)

This is an extremely restrictive condition. What if a single member fails to volunteer a certain marginal item of evidence available to him, so that it goes undebated by the group as a whole? This should not prevent the group from becoming justified with respect to a proposition p based on a large and weighty body of further evidence possessed by other group members and discussed by the group.

In contrast with Hakli’s relativistic account of group justifiedness, consider an account in the reliabilist tradition, which makes truth conduciveness (hence objectivity) a key element of the theory. This style of approach is exemplified by Alvin Goldman’s (2014) “social process reliabilist” approach to collective justifiedness, patterned on his earlier account of individual justifiedness (Goldman 1979).[2] Goldman starts by distinguishing two ways in which a group might form beliefs. First, it might use processes of belief aggregation (see section 4.2 above), in which member beliefs vis-à-vis proposition p are somehow transmuted into a collective belief of the group vis-à-vis the same proposition. He calls this “vertical” belief formation. Second, the group might use an inferential process in which its own (collective) beliefs in other propositions q, r, and s lead it to form a new belief in p. This is called “horizontal” belief formation. Goldman focuses his attention on vertical, or aggregative, belief formation, since it is more fundamental and a more distinctive aspect of collective epistemology. In metaphysical terms, however, no group belief is ever (token) identical with any member belief (or set of member beliefs). Nevertheless, it will be common for a group’s beliefs to supervene on, or to be “grounded” in, the beliefs of its members.

Given that groups engage in belief-formation of the vertical, or aggregative kind, a central question for group epistemology is how the justificational status of such group beliefs is fixed or determined. It is natural to assume that the justificational status of a group belief—at least when it is fixed in a vertical, or aggregative fashion—is a function of the justificational statuses of its members’ belief states (with respect to the same proposition). But what exactly is the functional relationship?

According to process reliabilism for individuals, justificational status for a belief is determined by the reliability of the psychological process(es) used by the agent in forming (or retaining) the belief. In process reliabilism for groups (in Goldman’s proposal), the justificational status of a group belief is, most directly, determined by the reliability (more precisely, the conditional reliability) of the aggregation process used, where aggregation processes take member belief states as inputs and group beliefs as outputs (all with respect to the same proposition). An example of such a process would be a majoritarian one: if more than 50% of the members believe p, the process generates a group belief in p as well.

However, this does not settle the matter if one wants to preserve a firm analogy between individual process reliabilism and group process reliabilism. In individual reliabilism, it isn’t sufficient for an output belief to be justified that it result from a (conditionally) reliable process. It is also necessary that the inputs to the process themselves be justified. In the case of inference, it isn’t sufficient that an agent use a reliable inferential process, i.e., one that usually carries true premises into true conclusions. The agent’s operative premises must also be justified. Applied to the collective belief case, this might mean that some appropriately high proportion of those members whose beliefs in p are causally responsible for the group’s formation of a belief in p must themselves be justified. In other words, sufficiently many input beliefs (in p) must be justified in order for the group’s output belief (in p) to be justified. Thus, members’ J-statuses with respect to their beliefs in p are determined by a history of belief-forming processes (some reliable and some unreliable). The J-status of the group belief in p is determined, in turn, by what proportion of its members believe p (as opposed to disbelieve or suspend judgment vis-à-vis p), and what proportion of them hold their doxastic states justifiedly.

Goldman winds up with two specific proposals, which seek to accommodate degrees of justifiedness. The first principle is:

  • (1) If a group belief in p is aggregated based on a profile of member attitudes toward p, then (ceteris paribus) the greater the proportion of members who justifiedly believe p and the smaller the proportion of members who justifiedly reject p, the greater the group’s level, or grade, of justifiedness in believing p.

The second principle is:

  • (2) A group belief G that is produced by an operation of a belief-aggregation process π is justified only if (and to the degree that) π has high conditional reliability.

These principles are the core of the process reliabilist approach, though many details, of course, are omitted here.
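To fix ideas, here is a minimal sketch (in Python) of vertical belief formation under a majoritarian aggregation process, with the justificational statuses of member attitudes tracked alongside. The numeric grading function at the end is our illustrative stand-in: principles (1) and (2) constrain how justifiedness should vary with the member profile, but they do not define any particular formula.

```python
# A toy sketch of majoritarian (vertical) belief aggregation.
# The "grade" computed below is an illustrative stand-in, not
# Goldman's own proposal; principles (1) and (2) only constrain
# how such a grade should behave.
from dataclasses import dataclass

@dataclass
class MemberAttitude:
    believes_p: bool   # True: believes p; False: rejects p
    justified: bool    # whether the member's attitude is justified

def aggregate(members):
    """The group believes p iff more than half the members do.
    If it does, grade its justifiedness by the proportion of
    justified believers minus the proportion of justified rejectors."""
    group_believes_p = sum(m.believes_p for m in members) > len(members) / 2
    if not group_believes_p:
        return False, 0.0
    just_belief = sum(m.believes_p and m.justified for m in members)
    just_reject = sum((not m.believes_p) and m.justified for m in members)
    grade = max(0.0, (just_belief - just_reject) / len(members))
    return True, grade

panel = ([MemberAttitude(True, True)] * 6      # justified believers
         + [MemberAttitude(False, True)] * 2   # justified rejectors
         + [MemberAttitude(True, False)] * 2)  # unjustified believers
print(aggregate(panel))   # (True, 0.4): group belief, middling grade
```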

A third approach to group justifiedness is presented by Jennifer Lackey (forthcoming). It is motivated and partly defended by reference to two generic competitors. One competitor is deflationary summativism, and the second is inflationary non-summativism. Lackey explains her terminology as follows. Inflationary non-summativism is a view that understands group justifiedness as a status that “floats freely” from the epistemic statuses of its members’ beliefs. By contrast, deflationary summativism is an approach that treats group justifiedness as nothing more than an aggregation of the justified beliefs of its members. Summativism is the thesis that a group G’s justifiedly believing p is to be understood only in terms of some or all of G’s members’ justifiedly believing p.

An example of inflationary non-summativism is the “joint acceptance account” (JAA) defended by Frederick Schmitt (1994a). According to this approach,

A group G justifiedly believes that p if and only if G has good reason to believe that p and believes that p for this reason.

Furthermore,

G has reason r to believe p if and only if G would properly express openly a willingness to accept r jointly as the group’s reason to believe p. (Schmitt 1994a: 265)

Lackey’s first objection to JAA is that it is too strong: not all members of a group need express willingness to accept a reason jointly in order for it to qualify as a group reason, and even weakening the requirement to some group members leaves it too strong. Her second objection draws on the tobacco company Philip Morris and its board members, each of whom was aware of scientific evidence of the addictiveness of smoking and its links with lung cancer and heart disease. Each of these members—as well as the company as a whole—had a reason to believe that the company should put warning labels on its cigarette packages. Yet none of these members was willing to accept jointly (i.e., verbally and publicly) that labels should be put on its cigarette packages. So the “joint acceptance” test is not a proper criterion for having a reason, either for members or for a group.

Turning now to the deflationary summativist approach, Lackey views Goldman’s process reliabilist account as the most detailed version of this approach and therefore concentrates her criticisms of deflationary summativism on his proposals.[3] There are three main criticisms. First, she points out that an adequate account of group justifiedness cannot be attained without attending to the evidential relations that exist between members’ beliefs, as well as to which of these beliefs are operative in generating the group belief. Second, she contends that group justification is constrained by certain epistemic obligations that arise from professional relationships among group members, a complication that the aggregative account cannot accommodate. Third, there can be cases of “defeating” evidence against a group belief that the aggregative account does not accommodate. Detailed examples are given to illustrate these points.

Building on these problems confronting the previous two views, Lackey advances her own view, which she calls the “group epistemic agent account” (GEAA). It is expressed in two principles:

  • (1) A group, G, justifiedly believes that p if and only if a significant percentage of the operative members of G (a) justifiedly believe that p, and (b) are such that adding together the bases of their justified beliefs that p yields a belief set that is coherent.
  • (2) Full disclosure of the evidence relevant to the proposition that p, accompanied by rational deliberation among the members of G in accordance with their individual and group epistemic normative requirements, would not result in further evidence that, when added to the bases of G’s members’ beliefs that p, yields a total belief set that fails to make probable that p.

All of these proposals are significant and will doubtless be studied by everyone interested in collective epistemology. Some of the proposals, however, are perfectly compatible with the spirit of (some of) the rival views. For example, it is surely right that a group belief’s justifiedness depends not only on the percentage of members who are justified in believing p but also on whether those members are operative in producing the group’s belief. This factor, however, could cheerfully be incorporated into a process reliabilist account, especially because causation of belief is at the core of process reliabilism. Its omission from Goldman’s treatment seems more of an oversight than a weakness in the theory’s fundamental character.

5. The Third Branch of SE: Institutions and Systems

5.1 The Social Epistemology of Science

Since science is the paradigm of a knowledge-seeking enterprise, epistemology and philosophy of science are intimately connected. Up to the 1960s, epistemology of science was conducted in a largely individualistic fashion. It focused on individual agents rather than teams and communities of scientists, and paid little attention to the social norms and arrangements governing scientific activity. However, at least since the publication of Kuhn’s hugely influential The Structure of Scientific Revolutions (1962), the scientific enterprise has been studied from a more social point of view. Scientists, after all, are influenced by their colleagues; they work in teams competing and collaborating with each other; they follow social norms governing methodology, presentation of results, allocation of prestige, and so on. Social epistemology of science investigates how these social dimensions influence the epistemic outcomes of scientific activity.

Historically the first social epistemological studies of science were conducted by sociologists, not philosophers. Post-Kuhnian sociology of science (a tradition often called “social studies of science” or “science and technology studies”) departs largely from the concerns and convictions of traditional epistemology and philosophy of science by rejecting the classical epistemological notions of objective truth, justification and knowledge, and/or by attempting to debunk the epistemic authority of science.

The question whether social studies of science really count as social epistemology is a subtle one. Many researchers in this tradition simply ignore traditional epistemological concerns with truth, justification and rationality. Consider for instance the symmetry thesis, according to which true and false beliefs should be given the same kind of causal explanation (Barnes and Bloor 1982). This is a central idea of the “strong program” developed in the 1970s by the Edinburgh school, the most influential group in the social studies of science. Proponents of the symmetry thesis claim that whether or not a belief is true should play no role in explaining why people hold it. Thus they officially decline to make any judgment about the epistemic properties of a belief in giving a causal explanation for it. They claim that epistemic concepts like truth or justification are not useful for their purposes.

Nevertheless, researchers in the social studies of science can be regarded as social epistemologists of science because they often endorse or suggest debunking or skeptical views about the epistemic authority of science. That is, they make epistemologically significant pronouncements (in the classical sense of “epistemology”) that cast doubt on science’s status as a privileged source of truth, justified belief and knowledge.

First, researchers in the social studies of science tend to embrace a form of relativism about the traditional concepts of epistemic justification and rationality, by rejecting the idea of universal and objective epistemic norms. As Barry Barnes and David Bloor put it, “there are no context-free or super-cultural norms of rationality” (1982: 27). (Researchers in the social studies of science usually try to defend this view by appealing to the Duhem-Quine thesis and Kuhnian considerations about incommensurability.) One consequence of this form of relativism is that science has no special universal or objective epistemic authority. The claim that science is a better source of justified belief or knowledge about the world than tea-leaf reading holds only relative to our local, socially situated norms of justification. There are familiar problems with relativism about epistemic justification, however (see Boghossian 2006).

Second, historical case studies undertaken by members of the Edinburgh school attempt to show that scientists are heavily influenced by social factors “external” to the proper business of science. Thus MacKenzie (1981) argues that the development of statistics in the 19th century was heavily influenced by the interests of the ruling classes of the time (for similar studies, see Forman 1971 and Shapin 1975). Other social analyses of science try to show how the game of scientific persuasion is essentially a battle for political power, where the outcome depends on the number or strength of one’s allies as contrasted with, say, genuine epistemic worth. If either of these claims were right, the epistemic status of science as an objective and authoritative source of information would be greatly reduced. However, there is an obvious theoretical problem here. How can these studies establish the debunking conclusions unless the studies themselves have epistemic authority? The studies themselves use the very empirical, scientific procedures they purport to debunk. If such procedures are epistemically questionable, the studies’ own results should be in question. Members of the Edinburgh School sometimes deny that they are trying to debunk or undermine science. Bloor, Barnes and Henry (1996), for example, say that they “honour science by imitation” (1996: viii). However, as James Robert Brown (2001) points out, this claim is disingenuous. They cannot intelligibly propose a revolution and then deny that it would change anything (2001: 143).

Third, some sociological approaches to science claim to show that scientific “facts” are not “out-there” entities, but are mere “fabrications” resulting from social interactions. This metaphysical thesis is a form of social constructivism. This is a view suggested by Latour and Woolgar in their influential book Laboratory Life: The Construction of Scientific Facts (1986 [1979]). In discussing social constructivism, it is essential to distinguish between weak and strong versions. Weak social constructivism is the view that human representations of reality—either linguistic or mental representations—are social constructs. For example, to say that gender is socially constructed, in this weak version of social constructivism, is to say that people’s representations or conceptions of gender are socially constructed. Strong social constructivism claims further that the entities themselves to which these representations refer are socially constructed. In other words, not only are scientific representations of certain biochemical substances socially constructed, but the substances themselves are socially constructed. The weak version of social constructivism is quite innocuous. Only the thesis of strong social constructivism is metaphysically (and, by implication, epistemologically) interesting. However, there are severe problems with this metaphysical thesis, as Andre Kukla (2000) explains.

Although the debunking aspects of social studies of science have left analytic philosophers by and large unmoved, post-Kuhnian sociology of knowledge has convinced many philosophers of science that close attention to the actual social practices of scientists is required. As a result, a growing body of work in analytic philosophy of science investigates the epistemic effects of these social practices. By contrast to social studies of science, philosophers in this tradition stand in continuity with traditional epistemology, and make no attempt at debunking science’s epistemic authority on the basis of social considerations. In fact, they tend to argue that what makes scientific activity epistemically special is in part the fact that its social structure is particularly well-attuned to science’s epistemic goals. In particular, they stress the epistemic benefits of the reward system and division of labor peculiar to science.

The ground-breaking work here is due to Philip Kitcher (1990, 1993). The starting point of his work is the thought that there is a tension between individual and collective rationality in science. Consider a situation in which there are several available methods or “research programs” for tackling a scientific problem (for instance the structure of DNA). And suppose in addition that method I has a better chance of succeeding than method II. Then if every scientist is motivated purely by doing the best science, she will choose to work on method I. However, Kitcher points out, it may be in the community’s best interest to “hedge its bets” and have a number of scientists working on the less promising method II. Kitcher shows that one can achieve the desired division of labor by adopting a certain reward scheme. On the relevant scheme, the reward of each scientist working on a successful program decreases as the number of people working on the program increases. (You may think of the reward as a fixed amount of prestige allocated equally among successful scientists.) Then if many people are already working on method I, new scientists will have an incentive to work on method II. Although method II has a lower chance of success, if it does succeed the reward will be bigger. Kitcher argues that the actual reward system of science works in pretty much this way.
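The incentive mechanism is easy to exhibit in a small sketch (our illustration, not Kitcher’s own formal model). Assume a fixed unit of prestige that a successful program’s workers split equally, and let each arriving scientist join whichever program offers the higher expected individual share; the success probabilities 0.7 and 0.3 are made-up values.

```python
# A minimal sketch of Kitcher-style division of labor via a
# shared-prestige reward scheme. All numbers are illustrative.

def allocate(num_scientists, p1=0.7, p2=0.3):
    """Scientists arrive one by one and join the program with the
    higher expected individual reward, where a successful program's
    unit of prestige is split equally among its workers."""
    n1 = n2 = 0
    for _ in range(num_scientists):
        if p1 / (n1 + 1) >= p2 / (n2 + 1):   # expected share of joining I
            n1 += 1
        else:                                 # vs. expected share of joining II
            n2 += 1
    return n1, n2

print(allocate(10))   # (7, 3): some scientists hedge on the weaker program
```

Even though each scientist here acts purely self-interestedly, the community as a whole ends up hedging its bets across both programs.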

Michael Strevens (2003) develops a formal model of scientific activity similar to Kitcher’s but argues that Kitcher’s reward system won’t produce the best division of labor. Another reward system is both better and closer to the actual practice of science. This is the priority rule, according to which the first research program that discovers a certain result gets all the reward (in this case, prestige). The fact that the actual reward system of science follows the priority rule was first discovered by the sociologist Robert Merton (1957), who pointed out that the history of science is littered with severe priority disputes. Merton took the priority rule to be a pathology of scientific activity. On Strevens’s view, by contrast, the priority rule works as an incentive for scientists to adopt the division of labor most epistemically beneficial to society.

Kitcher and Strevens point out the epistemic effects of diversity in “methods” or “research programs”, which encompass model-building strategies, ways of conducting experiments, and so on. More recently, Weisberg and Muldoon (2009) have pointed out the benefits of another kind of cognitive diversity, namely variation in patterns and strategies of intellectual engagement with other research teams.

Weisberg and Muldoon consider three strategies of engagement with the activity of other scientists. Scientists who follow the “control” research strategy simply ignore what other scientists are doing: they do not take into account the results of others in deciding which research program to explore. “Followers” (as the name suggests) follow the methods of research adopted by their predecessors: if a research program has already been explored and yielded significant results, they will tend to adopt this program. “Mavericks” also take into account the results of others in their exploration strategy, but in the opposite way: if a method has already been explored, they will adopt a different one. The question is whether and how fast these various groups can discover significant scientific results.

To investigate this question, Weisberg and Muldoon build a model in which a topic of scientific inquiry is represented by a 3-dimensional “epistemic landscape”. The x and y axes represent research programs (in Kitcher’s and Strevens’s sense). The vertical z axis represents the scientific importance of the results attainable by the research program corresponding to the (x, y) position. Scientists discover the comparative epistemic significance of research programs by visiting patches of the landscape, i.e., by working within the research program represented by the patch.

Algorithmizing the three strategies allows Weisberg and Muldoon to run computer simulations to discover whether and how fast followers of these strategies can “climb the peak” of the landscape, i.e., discover significant results. Thus their work is an instance of an increasingly popular way to do social epistemology, namely using computer simulations of social-intellectual activities.
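The following is a loose sketch of how such a simulation can be set up (ours, not Weisberg and Muldoon’s actual model: the landscape shape, grid size, movement rules, and all parameters are simplifying assumptions, and the “control” strategy is omitted). Followers prefer to move onto patches that have already been explored; mavericks prefer unexplored ones; both climb toward higher significance.

```python
# A toy "epistemic landscape" search with followers and mavericks.
# Everything here is an illustrative assumption, not W&M's model.
import random

SIZE = 50              # the landscape is a SIZE x SIZE grid (torus)
PEAK = (35, 35)        # location of a single peak of significance

def significance(x, y):
    """Epistemic significance: a cone that peaks at PEAK."""
    return max(0.0, 1.0 - 0.05 * (abs(x - PEAK[0]) + abs(y - PEAK[1])))

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def simulate(num_followers, num_mavericks, steps=2000):
    kinds = ["follower"] * num_followers + ["maverick"] * num_mavericks
    pos = {i: (random.randrange(SIZE), random.randrange(SIZE))
           for i in range(len(kinds))}
    visited = set(pos.values())
    for _ in range(steps):
        for i, kind in enumerate(kinds):
            options = neighbors(*pos[i])
            if kind == "follower":   # prefer already-explored patches
                pool = [p for p in options if p in visited] or options
            else:                    # mavericks avoid explored patches
                pool = [p for p in options if p not in visited] or options
            pos[i] = max(pool, key=lambda p: significance(*p))
            visited.add(pos[i])
    peak_found = any(significance(*p) >= 0.99 for p in visited)
    return peak_found, len(visited)   # success and ground covered

random.seed(0)
print(simulate(num_followers=40, num_mavericks=0))
print(simulate(num_followers=36, num_mavericks=4))
```

Whether a given run finds the peak, and how much ground gets covered, varies with the mix of strategies; it is systematic comparisons across many such runs, not any single run, that ground Weisberg and Muldoon’s conclusions.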

Through their simulations, Weisberg and Muldoon find that large populations of controls are good at finding patches with high degrees of epistemic significance, but this takes considerable time. Populations of followers fare worse than controls: they find peaks less frequently, and cover only a small portion of the regions of high epistemic significance. Mavericks fare better than controls: they find peaks more often and more quickly. The most interesting finding, however, concerns mixed populations of mavericks and followers: adding even a small number of mavericks to a population of followers boosts epistemic productivity. This is because when they interact, each strategy has a fruitful role to play. As a result, mixed populations not only discover peaks quickly but cover a lot of significant ground. Weisberg and Muldoon suggest that this situation (a few mavericks stimulating many followers) is close to what we observe in science. Thus, they provide further support for Kitcher’s insight that the cognitive division of labor in science is epistemically beneficial.

The work of Kevin Zollman (2007, 2010) is another example of the use of computer simulations in the social epistemology of science. Zollman investigates the following issue. Even though scientific diversity has epistemic benefits, the aim of scientific communities is eventually to arrive at a consensus on the right theory. But scientific consensus may occur prematurely. Suppose there are two competing theories T1 and T2 in a given scientific field. Even if T2 is the correct one, the initial experiments may very well favor T1. If all scientists come to accept T1 as a result, the wrong hypothesis will become the consensus view. Zollman illustrates this with an example from physiology. At the end of the 19th century, there were two proposed treatments for peptic ulcer, one relying on the hypothesis that ulcer is caused by bacteria, the other on the hypothesis that it is caused by peptic acid. Initial results favored the latter hypothesis, so a scientific consensus formed around it, and the bacterial hypothesis was abandoned for a long time. We now know that the bacterial hypothesis is correct. So in this case the scientific community reached consensus too quickly.

Zollman uses computer simulations to explore how diversity can ensure that scientific consensus isn’t reached in this premature fashion. His computer simulations reveal interesting correlations between the structure of the communication network and whether scientists’ beliefs converge on the right hypothesis. Surprisingly, structures with less communication between scientists are correlated with scientists converging on the right hypothesis. In strongly connected networks, initial results that favor the wrong theory become quickly known by everybody, which increases the risk that a consensus will form against the right hypothesis. Structures with less communication make for a wider diversity in scientists’ beliefs by ensuring that even if initial results disfavor the right theory many agents will not become aware of them. Zollman mentions another feature that can ensure that consensus isn’t reached too quickly. Suppose that some of the scientists are dogmatic—they are strongly biased in favor of what is in fact the right theory. Then even if initial results favor the other hypothesis, these scientists will be less responsive to this piece of evidence and will continue investigating the correct theory. This is another illustration of the idea that prima facie detrimental features of the practice of science (reduced communication and dogmatism) may in fact be epistemically beneficial.
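A rough sketch of such a model (ours, not Zollman’s exact setup; every parameter below is an illustrative assumption) has agents keep success/failure counts for two treatments, always test the one that currently looks better, and pool raw outcomes with their network neighbors. Varying the network between a sparse ring and a complete graph then lets one probe the effect of communication structure; it is Zollman’s systematic simulations, not a single toy run, that support the claim that sparser networks can protect the correct theory.

```python
# A toy network-epistemology model in the spirit of Zollman's
# simulations. All parameters are illustrative assumptions.
import random

TRUE_RATE = {1: 0.5, 2: 0.55}   # treatment 2 is in fact the better one

def run(edges, n_agents=10, trials=10, rounds=300):
    # per-agent [successes, failures] pseudo-counts for each treatment,
    # seeded with small random priors so initial opinions differ
    counts = [{t: [1 + 3 * random.random(), 1 + 3 * random.random()]
               for t in (1, 2)} for _ in range(n_agents)]
    neigh = {i: {i} for i in range(n_agents)}
    for a, b in edges:
        neigh[a].add(b)
        neigh[b].add(a)
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            est = {t: counts[i][t][0] / sum(counts[i][t]) for t in (1, 2)}
            t = max(est, key=est.get)   # test the better-looking treatment
            s = sum(random.random() < TRUE_RATE[t] for _ in range(trials))
            results.append((i, t, s))
        for i, t, s in results:         # share raw outcomes with neighbors
            for j in range(n_agents):
                if i in neigh[j]:
                    counts[j][t][0] += s
                    counts[j][t][1] += trials - s
    # fraction of agents who end up rating treatment 2 above treatment 1
    better = lambda c: c[2][0] / sum(c[2]) > c[1][0] / sum(c[1])
    return sum(better(c) for c in counts) / n_agents

random.seed(1)
ring = [(i, (i + 1) % 10) for i in range(10)]               # sparse
complete = [(i, j) for i in range(10) for j in range(i)]    # dense
print("ring:", run(ring), "complete:", run(complete))
```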

5.2 Democracy and Social Epistemology

Democracy is a widely touted institution, but what is its connection to epistemology? In recent decades an influential movement has arisen in political theory called the “epistemic” approach to democracy. Its general claim is that what makes democracy a superior form of government has something to do with its epistemic properties. Aristotle referred to democracy as the “rule of the many”, as contrasted with “the rule of the few” (oligarchy) and “the rule of the one” (monarchy, or autocracy). What is better about the rule of the many? Aristotle writes:

[T]he many, who are not as individuals excellent men, nevertheless can, when they have come together, be better than the few best people … just as feasts to which many contribute are better than feasts provided at one person’s expense. (Politics III, 11, 1281a41–1281b, trans. Reeve 1998: 83)

At best this only hints at a possible answer. For many current theorists, however, a core feature of democracy is majoritarian rule, which consists of granting votes to the citizenry at large and letting the majority opinion expressed in such a vote prevail. Furthermore, according to the Condorcet Jury Theorem (CJT), established by the French Enlightenment figure Marquis de Condorcet, majority rule can greatly enhance a polity’s prospects for getting a true answer on a binary (yes/no) question. Omitting appropriate qualifications for the moment, CJT says that if all voters in the electorate are individually more likely than not to hold a true opinion in a two-option choice, then the group judgment formed by aligning with the majority is more likely to be right than any individual voter is. As the size of the electorate increases, moreover, the likelihood of the majority being right rapidly approaches 1.0 as an asymptote.

Another way of expressing a similar core idea is to speak of the power of information pooling, or “The Wisdom of Crowds” (Surowiecki 2004). A striking illustration is due to Francis Galton, who performed a little experiment at an agricultural fair in rural England. About 800 participants were invited to estimate the weight of a displayed ox. Few participants had accurate individual estimates, but the average estimate, 1197 pounds, was almost identical to the true weight of the ox, 1198 pounds.
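The statistical point behind the anecdote is easy to reproduce with simulated (not Galton’s) data: individual errors that are large but unbiased and independent largely cancel in the mean.

```python
# Simulated wisdom-of-crowds averaging; the guesses are made up.
import random
random.seed(0)

TRUE_WEIGHT = 1198   # pounds
# 800 fairgoers guess with substantial but unbiased individual error
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(800)]
print(round(sum(guesses) / len(guesses)))   # lands within a few pounds
```

Note that the cancellation depends on the errors being independent and centered on the truth; a shared bias would survive the averaging.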

Let us examine the CJT more carefully to see if it lives up to its billing. The above roll-out is actually rather misleading. It hints at the notion that majority voting is an unconditionally reliable (truth-conducive) method, whereas in fact it is only conditionally reliable. That is, the group tends to be right only if all voters are individually biased in the direction of truth, for example, have a probability of 0.52 of being correct. But there is no a priori guarantee that each voter will be individually biased toward the truth.[4] To state this otherwise, CJT does not imply unconditional reliability, because such reliability fails when, for example, each individual voter is more likely than not to be wrong. Indeed, for this circumstance a “reverse” form of CJT implies that the group’s likelihood of being right (when following the majority) approaches zero as group size increases. A further constraint built into CJT is that voters must form their opinions independently of one another, where independence is not an easy condition to satisfy. There are different ways to define independence, however (Dietrich and Spiekermann 2013), and these details are not pursued here.
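The conditional character of the theorem can be checked numerically. The sketch below computes the exact probability that a majority of n independent voters is correct, once with each voter slightly biased toward the truth (p = 0.52) and once slightly biased toward error (p = 0.48); as n grows, the first probability climbs toward 1 while the second falls toward 0, illustrating both CJT and its “reverse” form. (Odd group sizes are used to avoid ties.)

```python
# Exact majority-correctness probabilities for independent voters.
from math import comb

def majority_correct(n, p):
    """P(more than half of n independent voters, each correct with
    probability p, get the right answer)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):
    print(n,
          round(majority_correct(n, 0.52), 3),   # biased toward truth
          round(majority_correct(n, 0.48), 3))   # biased toward error
```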

Even if these crucial (and rather restrictive) constraints are met, it still cannot be said that majority voting is at the top of the veritistic (truth-conducive) heap as compared with rival voting methods. Shapley and Grofman (1984) and Nitzan and Paroush (1982) proved that the optimal voting scheme, from the perspective of maximizing the group’s chance of getting the truth, is a weighted scheme. A maximally truth-conducive weighting scheme would assign to each voter i a weight w_i that is proportional to log(p_i / (1 − p_i)), where p_i is the probability that voter i gets the correct answer. To illustrate, suppose that a local weather bureau wants the best practice for predicting the weather, and can exploit the accuracy likelihoods of five independent experts, whose probabilities of correctly predicting rain versus non-rain are .90 for two of them and .60 for the other three. The optimal scheme for the bureau is not to give equal weight to all five forecasters, but instead to give weights of .392 to each of the two superior experts and weights of .072 to each of the three lesser experts. This weighting scheme gives the bureau a correctness probability of .927, as compared with a correctness probability of .877 for an equal weighting scheme. Since democracy is standardly associated with equal (i.e., unweighted) voting, democracy is not the best scheme from a purely epistemic standpoint. This does not necessarily show that democracy is an inferior political system, only that one might hesitate to make purely epistemic considerations the be-all and end-all of political desirability (a conception that Estlund 2008 dubs “epistocracy”).
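These numbers can be verified directly by enumeration. The sketch below computes the normalized log-odds weights and then, by summing over all 2^5 patterns of correct and incorrect experts, the probability that the weighted-majority verdict is right, under both the optimal and the equal weighting.

```python
# Reproducing the weather-bureau example: optimal log-odds weights
# (Nitzan-Paroush / Shapley-Grofman) versus equal weights.
from itertools import product
from math import log

probs = [0.9, 0.9, 0.6, 0.6, 0.6]   # the five experts' accuracies

raw = [log(p / (1 - p)) for p in probs]   # log-odds weights
weights = [w / sum(raw) for w in raw]      # normalized to sum to 1
print([round(w, 3) for w in weights])      # [0.392, 0.392, 0.072, 0.072, 0.072]

def group_accuracy(weights):
    """P(the weighted-majority verdict is correct), enumerating all
    2^5 right/wrong patterns of the independent experts."""
    total = 0.0
    for pattern in product([True, False], repeat=len(probs)):
        prob, correct_weight = 1.0, 0.0
        for right, p, w in zip(pattern, probs, weights):
            prob *= p if right else 1 - p
            if right:
                correct_weight += w
        if correct_weight > 0.5:
            total += prob
    return total

print(round(group_accuracy(weights), 3))    # 0.927 with optimal weights
print(round(group_accuracy([0.2] * 5), 3))  # 0.877 with equal weights
```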

Another perspective aimed at achieving the highest group competence emphasizes the value of a diverse set of voices or points of view (Hong and Page 2004; Sunstein 2006; Landemore 2011). Diversity expands the problem-solving approaches employed by the community and gathers a wider range of relevant evidence. Hong and Page (2004) produce evidence alleged to show that group success at problem-solving is less a function of its members’ individual abilities than of the diversity of their methods.

In political matters, group deliberation can be seen as occupying a stage prior to that of voting, a stage at which voters form their personal opinions by conversing or otherwise exchanging perspectives and arguments with other voters. This is thought by many theorists to be of fundamental importance to democracy.

However, there are two rather different conceptions of, or rationales for, the deliberative approach to democracy (Freeman 2000: 375–379). The first conception sees public deliberation as essential to the discovery of truth(s) about how best to promote the common good. In brief, it holds that deliberation is the best epistemic means to what is truly the common good—the presumed end of political association. The second conception, associated with Rawls and his followers, holds that deliberation is required to legitimate political institutions. To be legitimate, political institutions should be justifiable to all on the basis of reasons that all can reasonably accept. Notice that “justification” and “reasonableness” are both epistemic notions, or at least can be understood in epistemic senses. So this rationale for deliberative democracy may also be epistemic, although it is also easily understood in non-epistemic senses (e.g., as directed to collective planning).

Each of the two deliberative approaches to democracy faces serious challenges, albeit of different kinds. Starting with the truth-oriented approach, what is the proof, or even substantial evidence, that public deliberation is conducive to truth, i.e., accuracy of judgment? And does such truth-conduciveness hold for political discourse in particular?

Epistemology

Epistemology is the study of knowledge. Epistemologists concern themselves with a number of tasks, which we might sort into two categories.

First, we must determine the nature of knowledge; that is, what does it mean to say that someone knows, or fails to know, something? This is a matter of understanding what knowledge is, and how to distinguish between cases in which someone knows something and cases in which someone does not know something. While there is some general agreement about some aspects of this issue, we shall see that this question is much more difficult than one might imagine.

Second, we must determine the extent of human knowledge; that is, how much do we, or can we, know? How can we use our reason, our senses, the testimony of others, and other resources to acquire knowledge? Are there limits to what we can know? For instance, are some things unknowable? Is it possible that we do not know nearly as much as we think we do? Should we have a legitimate worry about skepticism, the view that we do not or cannot know anything at all?

While this article provides an overview of the important issues, it leaves the most basic questions unanswered; epistemology will continue to be an area of philosophical discussion as long as these questions remain.

Table of Contents

  1. Kinds of Knowledge
  2. The Nature of Propositional Knowledge
    1. Belief
    2. Truth
    3. Justification
    4. The Gettier Problem
      1. The No-False-Belief Condition
      2. The No-Defeaters Condition
      3. Causal Accounts of Knowledge
  3. The Nature of Justification
    1. Internalism
      1. Foundationalism
      2. Coherentism
    2. Externalism
  4. The Extent of Human Knowledge
    1. Sources of Knowledge
    2. Skepticism
    3. Cartesian Skepticism
    4. Humean Skepticism
      1. Numerical vs. Qualitative Identity
      2. Hume's Skepticism about Induction
  5. Conclusion
  6. References and Further Reading

1. Kinds of Knowledge

The term “epistemology” comes from the Greek "episteme," meaning "knowledge," and "logos," meaning, roughly, "study, or science, of." "Logos" is the root of all terms ending in "-ology" – such as psychology, anthropology – and of "logic," and has many other related meanings.

The word "knowledge" and its cognates are used in a variety of ways. One common use of the word "know" is as an expression of psychological conviction. For instance, we might hear someone say, "I just knew it wouldn't rain, but then it did." While this may be an appropriate usage, philosophers tend to use the word "know" in a factive sense, so that one cannot know something that is not the case. (This point is discussed at greater length in section 2b below.)

Even if we restrict ourselves to factive usages, there are still multiple senses of "knowledge," and so we need to distinguish between them. One kind of knowledge is procedural knowledge, sometimes called competence or "know-how;" for example, one can know how to ride a bicycle, or one can know how to drive from Washington, D.C. to New York. Another kind of knowledge is acquaintance knowledge or familiarity; for instance, one can know the department chairperson, or one can know Philadelphia.

Epistemologists typically do not focus on procedural or acquaintance knowledge, however, instead preferring to focus on propositional knowledge. A proposition is something which can be expressed by a declarative sentence, and which purports to describe a fact or a state of affairs, such as "Dogs are mammals," "2+2=7," "It is wrong to murder innocent people for fun." (Note that a proposition may be true or false; that is, it need not actually express a fact.) Propositional knowledge, then, can be called knowledge-that; statements of propositional knowledge (or the lack thereof) are properly expressed using "that"-clauses, such as "He knows that Houston is in Texas," or "She does not know that the square root of 81 is 9." In what follows, we will be concerned only with propositional knowledge.

Propositional knowledge, obviously, encompasses knowledge about a wide range of matters: scientific knowledge, geographical knowledge, mathematical knowledge, self-knowledge, and knowledge about any field of study whatever. Any truth might, in principle, be knowable, although there might be unknowable truths. One goal of epistemology is to determine the criteria for knowledge so that we can know what can or cannot be known; in other words, the study of epistemology fundamentally includes the study of meta-epistemology (what we can know about knowledge itself).

We can also distinguish between different types of propositional knowledge, based on the source of that knowledge. Non-empirical or a priori knowledge is possible independently of, or prior to, any experience, and requires only the use of reason; examples include knowledge of logical truths such as the law of non-contradiction, as well as knowledge of abstract claims (such as ethical claims or claims about various conceptual matters). Empirical or a posteriori knowledge is possible only subsequent, or posterior, to certain sense experiences (in addition to the use of reason); examples include knowledge of the color or shape of a physical object or knowledge of geographical locations. (Some philosophers, called rationalists, believe that all knowledge is ultimately grounded upon reason; others, called empiricists, believe that all knowledge is ultimately grounded upon experience.) A thorough epistemology should, of course, address all kinds of knowledge, although there might be different standards for a priori and a posteriori knowledge.

We can also distinguish between individual knowledge and collective knowledge. Social epistemology is the subfield of epistemology that addresses the way that groups, institutions, or other collective bodies might come to acquire knowledge.

2. The Nature of Propositional Knowledge

Having narrowed our focus to propositional knowledge, we must ask ourselves what, exactly, constitutes knowledge. What does it mean for someone to know something? What is the difference between someone who knows something and someone else who does not know it, or between something one knows and something one does not know? Since the scope of knowledge is so broad, we need a general characterization of knowledge, one which is applicable to any kind of proposition whatsoever. Epistemologists have usually undertaken this task by seeking a correct and complete analysis of the concept of knowledge, in other words a set of individually necessary and jointly sufficient conditions which determine whether someone knows something.

a. Belief

Let us begin with the observation that knowledge is a mental state; that is, knowledge exists in one's mind, and unthinking things cannot know anything. Further, knowledge is a specific kind of mental state. While "that"-clauses can also be used to describe desires and intentions, these cannot constitute knowledge. Rather, knowledge is a kind of belief. If one has no beliefs about a particular matter, one cannot have knowledge about it.

For instance, suppose that I desire that I be given a raise in salary, and that I intend to do whatever I can to earn one. Suppose further that I am doubtful as to whether I will indeed be given a raise, due to the intricacies of the university's budget and such. Given that I do not believe that I will be given a raise, I cannot be said to know that I will. Only if I am inclined to believe something can I come to know it. Similarly, thoughts that an individual has never entertained are not among his beliefs, and thus cannot be included in his body of knowledge.

Some beliefs, those which the individual is actively entertaining, are called occurrent beliefs. The majority of an individual's beliefs are non-occurrent; these are beliefs that the individual has in the background but is not entertaining at a particular time. Correspondingly, most of our knowledge is non-occurrent, or background, knowledge; only a small amount of one's knowledge is ever actively on one's mind.

b. Truth

Knowledge, then, requires belief. Of course, not all beliefs constitute knowledge. Belief is necessary but not sufficient for knowledge. We are all sometimes mistaken in what we believe; in other words, while some of our beliefs are true, others are false. As we try to acquire knowledge, then, we are trying to increase our stock of true beliefs (while simultaneously minimizing our false beliefs).

We might say that the most typical purpose of beliefs is to describe or capture the way things actually are; that is, when one forms a belief, one is seeking a match between one's mind and the world. (We sometimes, of course, form beliefs for other reasons – to create a positive attitude, to deceive ourselves, and so forth – but when we seek knowledge, we are trying to get things right.) And, alas, we sometimes fail to achieve such a match; some of our beliefs do not describe the way things actually are.

Note that we are assuming here that there is such a thing as objective truth, so that it is possible for beliefs to match or to fail to match with reality. That is, in order for someone to know something, there must be something one knows about. Recall that we are discussing knowledge in the factive sense; if there are no facts of the matter, then there's nothing to know (or to fail to know). This assumption is not universally accepted – in particular, it is not shared by some proponents of relativism – but it will not be defended here. However, we can say that truth is a condition of knowledge; that is, if a belief is not true, it cannot constitute knowledge. Accordingly, if there is no such thing as truth, then there can be no knowledge. Even if there is such a thing as truth, if there is a domain in which there are no truths, then there can be no knowledge within that domain. (For example, if beauty is in the eye of the beholder, then a belief that something is beautiful cannot be true or false, and thus cannot constitute knowledge.)

c. Justification

Knowledge, then, requires true belief. However, this does not suffice to capture the nature of knowledge. Just as knowledge requires successfully achieving the objective of true belief, it also requires success with regard to the formation of that belief. In other words, not all true beliefs constitute knowledge; only true beliefs arrived at in the right way constitute knowledge.

What, then, is the right way of arriving at beliefs? In addition to truth, what other properties must a belief have in order to constitute knowledge? We might begin by noting that sound reasoning and solid evidence seem to be the way to acquire knowledge. By contrast, a lucky guess cannot constitute knowledge. Similarly, misinformation and faulty reasoning do not seem like a recipe for knowledge, even if they happen to lead to a true belief. A belief is said to be justified if it is obtained in the right way. While justification seems, at first glance, to be a matter of a belief's being based on evidence and reasoning rather than on luck or misinformation, we shall see that there is much disagreement regarding how to spell out the details.

The requirement that knowledge involve justification does not necessarily mean that knowledge requires absolute certainty, however. Humans are fallible beings, and fallibilism is the view that it is possible to have knowledge even when one's true belief might have turned out to be false. Between beliefs which were necessarily true and those which are true solely by luck lies a spectrum of beliefs with regard to which we had some defeasible reason to believe that they would be true. For instance, if I heard the weatherman say that there is a 90% chance of rain, and as a result I formed the belief that it would rain, then my true belief that it would rain was not true purely by luck. Even though there was some chance that my belief might have been false, there was a sufficient basis for that belief for it to constitute knowledge. This basis is referred to as the justification for that belief. We can then say that, to constitute knowledge, a belief must be both true and justified.

Note that because of luck, a belief can be unjustified yet true; and because of human fallibility, a belief can be justified yet false. In other words, truth and justification are two independent conditions of beliefs. The fact that a belief is true does not tell us whether or not it is justified; that depends on how the belief was arrived at. So, two people might hold the same true belief, but for different reasons, so that one of them is justified and the other is unjustified. Similarly, the fact that a belief is justified does not tell us whether it's true or false. Of course, a justified belief will presumably be more likely to be true than to be false, and justified beliefs will presumably be more likely to be true than unjustified beliefs are. (As we will see in section 3 below, the exact nature of the relationship between truth and justification is contentious.)

d. The Gettier Problem

For some time, the justified true belief (JTB) account was widely agreed to capture the nature of knowledge. However, in 1963, Edmund Gettier published a short but widely influential article which has shaped much subsequent work in epistemology. Gettier provided two examples in which someone had a true and justified belief, but in which we seem to want to deny that the individual has knowledge, because luck still seems to play a role in his belief having turned out to be true.

Consider an example. Suppose that the clock on campus (which keeps accurate time and is well maintained) stopped working at 11:56pm last night, and has yet to be repaired. On my way to my noon class, exactly twelve hours later, I glance at the clock and form the belief that the time is 11:56. My belief is true, of course, since the time is indeed 11:56. And my belief is justified, as I have no reason to doubt that the clock is working, and I cannot be blamed for basing beliefs about the time on what the clock says. Nonetheless, it seems evident that I do not know that the time is 11:56. After all, if I had walked past the clock a bit earlier or a bit later, I would have ended up with a false belief rather than a true one.

This example and others like it, while perhaps somewhat far-fetched, seem to show that it is possible for justified true belief to fail to constitute knowledge. To put it another way, the justification condition was meant to ensure that knowledge was based on solid evidence rather than on luck or misinformation, but Gettier-type examples seem to show that justified true belief can still involve luck and thus fall short of knowledge. This problem is referred to as "the Gettier problem." To solve this problem, we must either show that all instances of justified true belief do indeed constitute knowledge, or alternatively refine our analysis of knowledge.

i. The No-False-Belief Condition

We might think that there is a simple and straightforward solution to the Gettier problem. Note that my reasoning was tacitly based on my belief that the clock is working properly, and that this belief is false. This seems to explain what has gone wrong in this example. Accordingly, we might revise our analysis of knowledge by insisting that to constitute knowledge, a belief must be true and justified and must be formed without relying on any false beliefs. In other words, we might say, justification, truth, and belief are all necessary for knowledge, but they are not jointly sufficient for knowledge; there is a fourth condition – namely, that no false beliefs be essentially involved in the reasoning that led to the belief – which is also necessary.

Unfortunately, this will not suffice; we can modify the example so that my belief is justified and true, and is not based on any false beliefs, but still falls short of knowledge. Suppose, for instance, that I do not have any beliefs about the clock's current state, but merely the more general belief that the clock usually is in working order. This belief, which is true, would suffice to justify my belief that the time is now 11:56; of course, it still seems evident that I do not know the time.

ii. The No-Defeaters Condition

However, the no-false-belief condition does not seem to be completely misguided; perhaps we can add some other condition to justification and truth to yield a correct characterization of knowledge. Note that, even if I didn't actively form the belief that the clock is currently working properly, it seems to be implicit in my reasoning, and the fact that it is false is surely relevant to the problem. After all, if I were asked, at the time that I looked at the clock, whether it is working properly, I would have said that it is. Conversely, if I believed that the clock wasn't working properly, I wouldn't be justified in forming a belief about the time based on what the clock says.

In other words, the proposition that the clock is working properly right now meets the following conditions: it is a false proposition, I do not realize that it is a false proposition, and if I had realized that it is a false proposition, my justification for my belief that it is 11:56 would have been undercut or defeated. If we call propositions such as this "defeaters," then we can say that to constitute knowledge, a belief must be true and justified, and there must not be any defeaters to the justification of that belief. Many epistemologists believe this analysis to be correct.

iii. Causal Accounts of Knowledge

Rather than modifying the JTB account of knowledge by adding a fourth condition, some epistemologists see the Gettier problem as reason to seek a substantially different alternative. We have noted that knowledge should not involve luck, and that Gettier-type examples are those in which luck plays some role in the formation of a justified true belief. In typical instances of knowledge, the factors responsible for the justification of a belief are also responsible for its truth. For example, when the clock is working properly, my belief is both true and justified because it's based on the clock, which accurately displays the time. But one feature that all Gettier-type examples have in common is the lack of a clear connection between the truth and the justification of the belief in question. For example, my belief that the time is 11:56 is justified because it's based on the clock, but it's true because I happened to walk by at just the right moment. So, we might insist that to constitute knowledge, a belief must be both true and justified, and its truth and justification must be connected somehow.

This notion of a connection between the truth and the justification of a belief turns out to be difficult to formulate precisely, but causal accounts of knowledge seek to capture the spirit of this proposal by more significantly altering the analysis of knowledge. Such accounts maintain that in order for someone to know a proposition, there must be a causal connection between his belief in that proposition and the fact that the proposition encapsulates. This retains the truth condition, since a proposition must be true in order for it to encapsulate a fact. However, it appears to be incompatible with fallibilism, since it does not allow for the possibility that a belief be justified yet false. (Strictly speaking, causal accounts of knowledge make no reference to justification, although we might attempt to reformulate fallibilism in somewhat modified terms in order to state this observation.)

While causal accounts of knowledge are no longer thought to be correct, they have engendered reliabilist theories of knowledge, which shall be discussed in section 3b below.

3. The Nature of Justification

One reason that the Gettier problem is so problematic is that neither Gettier nor anyone who preceded him had offered a sufficiently clear and accurate analysis of justification. We have said that justification is a matter of a belief's having been formed in the right way, but we have yet to say what that amounts to. We must now consider this matter more closely.

We have noted that the goal of our belief-forming practices is to obtain truth while avoiding error, and that justification is the feature of beliefs which are formed in such a way as to best pursue this goal. If we think, then, of the goal of our belief-forming practices as an attempt to establish a match between one's mind and the world, and if we also think of the application or withholding of the justification condition as an evaluation of whether this match was arrived at in the right way, then there seem to be two obvious approaches to construing justification: namely, in terms of the believer's mind, or in terms of the world.

a. Internalism

Belief is a mental state, and belief-formation is a mental process. Accordingly, one might reason, whether or not a belief is justified – whether, that is, it is formed in the right way – can be determined by examining the thought-processes of the believer during its formation. Such a view, which maintains that justification depends solely on factors internal to the believer's mind, is called internalism. (The term "internalism" has different meanings in other contexts; here, it will be used strictly to refer to this type of view about epistemic justification.)

According to internalism, the only factors that are relevant to the determination of whether a belief is justified are the believer's other mental states. After all, an internalist will argue, only an individual's mental states – her beliefs about the world, her sensory inputs (for example, her sense data) and her beliefs about the relations between her various beliefs – can determine what new beliefs she will form, so only an individual's mental states can determine whether any particular belief is justified. In particular, in order to be justified, a belief must be appropriately based upon or supported by other mental states.

This raises the question of what constitutes the basing or support relation between a belief and one's other mental states. We might want to say that, in order for belief A to be appropriately based on belief B (or beliefs B1 and B2, or B1, B2, and…Bn), the truth of B must suffice to establish the truth of A, in other words, B must entail A. (We shall consider the relationship between beliefs and sensory inputs below.) However, if we want to allow for our fallibility, we must instead say that the truth of B would give one good reason to believe that A is also true (by making it likely or probable that A is true). An elaboration of what counts as a good reason for belief, accordingly, is an essential part of any internalist account of justification.

However, there is an additional condition that we must add: belief B must itself be justified, since unjustified beliefs cannot confer justification on other beliefs. Because belief B must also be justified, must there be some justified belief C upon which B is based? If so, C must itself be justified, and it may derive its justification from some further justified belief, D. This chain of beliefs deriving their justification from other beliefs may continue forever, leading us in an infinite regress. While the idea of an infinite regress might seem troubling, the primary ways of avoiding such a regress may have their own problems as well. This raises the "regress problem," which begins from observing that there are only four possibilities as to the structure of one's justified beliefs:

  1. The series of justified beliefs, each based upon the other, continues infinitely.
  2. The series of justified beliefs circles back to its beginning (A is based on B, B on C, C on D, and D on A).
  3. The series of justified beliefs begins with an unjustified belief.
  4. The series of justified beliefs begins with a belief which is justified, but not by virtue of being based on another justified belief.

These alternatives seem to exhaust the possibilities. That is, if one has any justified beliefs, one of these four possibilities must describe the relationships between those beliefs. As such, a complete internalist account of justification must decide among the four.

i. Foundationalism

Let us, then, consider each of the four possibilities mentioned above. Alternative 1 seems unacceptable because the human mind can contain only finitely many beliefs, and any thought-process that leads to the formation of a new belief must have some starting point. Alternative 2 seems no better, since circular reasoning appears to be fallacious. And alternative 3 has already been ruled out, since it renders the second belief in the series (and, thus, all subsequent beliefs) unjustified. That leaves alternative 4, which must, by process of elimination, be correct.

This line of reasoning, which is typically known as the regress argument, leads to the conclusion that there are two different kinds of justified beliefs: those which begin a series of justified beliefs, and those which are based on other justified beliefs. The former, called basic beliefs, are able to confer justification on other, non-basic beliefs, without themselves having their justification conferred upon them by other beliefs. As such, there is an asymmetrical relationship between basic and non-basic beliefs. Such a view of the structure of justified belief is known as "foundationalism." In general, foundationalism entails that there is an asymmetrical relationship between any two beliefs: if A is based on B, then B cannot be based on A.

Accordingly, it follows that at least some beliefs (namely basic beliefs) are justified in some way other than by way of a relation to other beliefs. Basic beliefs must be self-justified, or must derive their justification from some non-doxastic source such as sensory inputs; the exact source of the justification of basic beliefs needs to be explained by any complete foundationalist account of justification.

ii. Coherentism

Internalists might be dissatisfied with foundationalism, since it allows for the possibility of beliefs that are justified without being based upon other beliefs. Since it was our solution to the regress problem that led us to foundationalism, and since none of the alternatives seem palatable, we might look for a flaw in the problem itself. Note that the problem is based on a pivotal but hitherto unstated assumption: namely, that justification is linear in fashion. That is, the statement of the regress problem assumes that the basing relation parallels a logical argument, with one belief being based on one or more other beliefs in an asymmetrical fashion.

So, an internalist who finds foundationalism to be problematic might deny this assumption, maintaining instead that justification is the result of a holistic relationship among beliefs. That is, one might maintain that beliefs derive their justification by inclusion in a set of beliefs which cohere with one another as a whole; a proponent of such a view is called a coherentist.

A coherentist, then, sees justification as a relation of mutual support among many beliefs, rather than as a chain of beliefs ordered by an asymmetrical basing relation. A belief derives its justification, according to coherentism, not by being based on one or more other beliefs, but by virtue of its membership in a set of beliefs that all fit together in the right way. (The coherentist needs to specify what constitutes coherence, of course. It must be something more than logical consistency, since two entirely unrelated beliefs may be consistent. Rather, there must be some positive support relationship, for instance some sort of explanatory relationship, between the members of a coherent set in order for the beliefs to be individually justified.)

Coherentism is vulnerable to the "isolation objection". It seems possible for a set of beliefs to be coherent, but for all of those beliefs to be isolated from reality. Consider, for instance, a work of fiction. All of the statements in the work of fiction might form a coherent set, but presumably believing all and only the statements in a work of fiction will not render one justified. Indeed, any form of internalism seems vulnerable to this objection, and thus a complete internalist account of justification must address it. Recall that justification requires a match between one's mind and the world, and an inordinate emphasis on the relations between the beliefs in one's mind seems to ignore the question of whether those beliefs match up with the way things actually are.

b. Externalism

Accordingly, one might think that focusing solely on factors internal to the believer's mind will inevitably lead to a mistaken account of justification. The alternative, then, is that at least some factors external to the believer's mind determine whether or not she is justified. A proponent of such a view is called an externalist.

According to externalism, the only way to avoid the isolation objection and ensure that knowledge does not include luck is to consider some factors other than the individual's other beliefs. Which factors, then, should be considered? The most prominent version of externalism, called reliabilism, suggests that we consider the source of a belief. Beliefs can be formed as a result of many different sources, such as sense experience, reason, testimony, and memory. More precisely, we might specify which sense was used, who provided the testimony, what sort of reasoning was employed, or how recent the relevant memory is. For every belief, we can indicate the cognitive process that led to its formation. In its simplest and most straightforward form, reliabilism maintains that whether or not a belief is justified depends upon whether that process is a reliable source of true beliefs. Since we are seeking a match between our mind and the world, justified beliefs are those which result from processes which regularly achieve such a match. So, for example, using vision to determine the color of an object which is well-lit and relatively near is a reliable belief-forming process for a person with normal vision, but not for a color-blind person. Forming beliefs on the basis of the testimony of an expert is likely to yield true beliefs, but forming beliefs on the basis of the testimony of compulsive liars is not. In general, if a belief is the result of a cognitive process which reliably (most of the time; we still want to leave room for human fallibility) leads to true beliefs, then that belief is justified.
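Reliabilism's central notion, the long-run truth-ratio of a process, can be illustrated with a small simulation. All of the details below (the error rates, the choice of colors, the 0.9 cutoff for counting a process "reliable") are invented for the example; reliabilism itself fixes no such numbers.

```python
import random

# A hedged illustration of reliabilism: a belief-forming "process" is
# modelled as a function from the world to a belief, and its reliability
# is its long-run frequency of producing true beliefs.

def normal_vision(actual_color):
    """Report the color correctly 98% of the time."""
    return actual_color if random.random() < 0.98 else "some other color"

def color_blind_vision(actual_color):
    """Confuse red and green half the time."""
    if actual_color in ("red", "green") and random.random() < 0.5:
        return "green" if actual_color == "red" else "red"
    return actual_color

def reliability(process, trials=100_000):
    """Estimate the process's long-run truth-ratio over many trials."""
    colors = ["red", "green", "blue"]
    hits = sum(process(c) == c for c in random.choices(colors, k=trials))
    return hits / trials

for process in (normal_vision, color_blind_vision):
    r = reliability(process)
    verdict = "justifying" if r > 0.9 else "not justifying"
    print(f"{process.__name__}: reliability ~ {r:.2f} ({verdict})")
```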

The foregoing suggests one immediate challenge for reliabilism. The formation of a belief is a one-time event, but the reliability of the process depends upon the long-term performance of that process. (This can include counterfactual as well as actual events. For instance, a coin which is flipped only once and lands on heads nonetheless has a 50% chance of landing on tails, even though its actual performance has yielded heads 100% of the time.) And this requires that we specify which process is being used, so that we can evaluate its performance in other instances. However, cognitive processes can be described in more or less general terms: for example, the same belief-forming process might be variously described as sense experience, vision, vision by a normally-sighted person, vision by a normally-sighted person in daylight, vision by a normally-sighted person in daylight while looking at a tree, vision by a normally-sighted person in daylight while looking at an elm tree, and so forth. The "generality problem" notes that some of these descriptions might specify a reliable process but others might specify an unreliable process, so that we cannot know whether a belief is justified or unjustified unless we know the appropriate level of generality to use in describing the process.
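To see the problem concretely, suppose, purely for illustration, that we had truth-ratios for one and the same belief-forming event under descriptions of increasing specificity. Whether the resulting belief counts as justified then depends entirely on which description we privilege; the figures below are hypothetical, as nothing in the argument supplies real numbers.

```python
# Invented truth-ratios for one and the same belief-forming event,
# described at different levels of generality.
descriptions = [
    ("sense experience",                                       0.70),
    ("vision",                                                 0.85),
    ("vision by a normally-sighted person",                    0.93),
    ("vision by a normally-sighted person in daylight",        0.97),
    ("vision by a normally-sighted person looking at an elm",  0.60),
]

RELIABLE = 0.90  # an arbitrary cutoff for "reliable"

for description, truth_ratio in descriptions:
    verdict = "justified" if truth_ratio >= RELIABLE else "unjustified"
    print(f"{description}: {truth_ratio:.2f} -> {verdict}")

# The same token belief comes out justified under some descriptions of
# its generating process and unjustified under others; without a
# principled level of generality, reliabilism delivers no verdict.
```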

Even if the generality problem can be solved, another problem remains for externalism. Keith Lehrer presents this problem by way of his example of Mr. Truetemp. Truetemp has, unbeknownst to him, had a tempucomp – a device which accurately reads the temperature and causes a spontaneous belief about that temperature – implanted in his brain. As a result, he has many true beliefs about the temperature, but he does not know why he has them or what their source is. Lehrer argues that, although Truetemp's belief-forming process is reliable, his ignorance of the tempucomp renders his temperature-beliefs unjustified, and thus that a reliable cognitive process cannot yield justification unless the believer is aware of the fact that the process is reliable. In other words, the mere fact that the process is reliable does not suffice, Lehrer concludes, to justify any beliefs which are formed via that process.

4. The Extent of Human Knowledge

a. Sources of Knowledge

Given the above characterization of knowledge, there are many ways that one might come to know something. Knowledge of empirical facts about the physical world will necessarily involve perception, in other words, the use of the senses. Science, with its collection of data and conducting of experiments, is the paradigm of empirical knowledge. However, much of our more mundane knowledge comes from the senses, as we look, listen, smell, touch, and taste the various objects in our environments.

But all knowledge requires some amount of reasoning. Data collected by scientists must be analyzed before knowledge is yielded, and we draw inferences from what our senses tell us. Knowledge of abstract or non-empirical facts, by contrast, will rely exclusively upon reasoning. In particular, intuition is often believed to be a sort of direct access to knowledge of the a priori.

Once knowledge is obtained, it can be sustained and passed on to others. Memory allows us to know something that we knew in the past, even, perhaps, if we no longer remember the original justification. Knowledge can also be transmitted from one individual to another via testimony; that is, my justification for a particular belief could amount to the fact that some trusted source has told me that it is true.

b. Skepticism

In addition to the nature of knowledge, epistemologists concern themselves with the question of the extent of human knowledge: how much do we, or can we, know? Whatever turns out to be the correct account of the nature of knowledge, there remains the matter of whether we actually have any knowledge. It has been suggested that we do not, or cannot, know anything, or at least that we do not know as much as we think we do. Such a view is called skepticism.

We can distinguish between a number of different varieties of skepticism. First, one might be a skeptic only with regard to certain domains, such as mathematics, morality, or the external world (this is the most well-known variety of skepticism). Such a skeptic is a local skeptic, as contrasted with a global skeptic, who maintains that we cannot know anything at all. Also, since knowledge requires that our beliefs be both true and justified, a skeptic might maintain that none of our beliefs are true or that none of them are justified (the latter is much more common than the former).

While it is quite easy to challenge any claim to knowledge by glibly asking, "How do you know?", this does not suffice to show that skepticism is an important position. Like any philosophical stance, skepticism must be supported by an argument. Many arguments have been offered in defense of skepticism, and many responses to those arguments have been offered in return. Here, we shall consider two of the most prominent arguments in support of skepticism about the external world.

c. Cartesian Skepticism

In the first of his Meditations, René Descartes offers an argument in support of skepticism, which he then attempts to refute in the later Meditations. The argument notes that some of our perceptions are inaccurate. Our senses can trick us; we sometimes mistake a dream for a waking experience, and it is possible that an evil demon is systematically deceiving us. (The modern version of the evil demon scenario is that you are a brain-in-a-vat, because scientists have removed your brain from your skull, connected it to a sophisticated computer, and immersed it in a vat of preservative fluid. The computer produces what seem to be genuine sense experiences, and also responds to your brain's output to make it seem that you are able to move about in your environment as you did when your brain was still in your body. While this scenario may seem far-fetched, we must admit that it is at least possible.)

As a result, some of our beliefs will be false. In order to be justified in believing what we do, we must have some way to distinguish between those beliefs which are true (or, at least, are likely to be true) and those which are not. But just as there are no signs that will allow us to distinguish between waking and dreaming, there are no signs that will allow us to distinguish between beliefs that are accurate and beliefs which are the result of the machinations of an evil demon. This indistinguishability between trustworthy and untrustworthy belief, the argument goes, renders all of our beliefs unjustified, and thus we cannot know anything. A satisfactory response to this argument, then, must show either that we are indeed able to distinguish between true and false beliefs, or that we need not be able to make such a distinction.

d. Humean Skepticism

According to the indistinguishability skeptic, my senses can tell me how things appear, but not how they actually are. We need to use reason to construct an argument that leads us from beliefs about how things appear to (justified) beliefs about how they are. But even if we are able to trust our perceptions, so that we know that they are accurate, David Hume argues that the specter of skepticism remains. Note that we only perceive a very small part of the universe at any given moment, although we think that we have knowledge of the world beyond that which we are currently perceiving. It follows, then, that the senses alone cannot account for this knowledge, and that reason must supplement the senses in some way in order to account for any such knowledge. However, Hume argues, reason is incapable of providing justification for any belief about the external world beyond the scope of our current sense perceptions. Let us consider two such possible arguments and Hume's critique of them.

i. Numerical vs. Qualitative Identity

We typically believe that the external world is, for the most part, stable. For instance, I believe that my car is parked where I left it this morning, even though I am not currently looking at it. If I were to go peek out the window right now and see my car, I might form the belief that my car has been in the same space all day. What is the basis for this belief? If asked to make my reasoning explicit, I might proceed as follows:

  1. I have had two sense-experiences of my car: one this morning and one just now.
  2. The two sense-experiences were (more or less) identical.
  3. Therefore, it is likely that the objects that caused them are identical.
  4. Therefore, a single object – my car – has been in that parking space all day.

Similar reasoning would undergird all of our beliefs about the persistence of the external world and all of the objects we perceive. But are these beliefs justified? Hume thinks not, since the above argument (and all arguments like it) contains an equivocation. In particular, the first occurrence of "identical" refers to qualitative identity. The two sense-experiences are not one and the same, but are distinct; when we say that they are identical we mean that one is similar to the other in all of its qualities or properties. But the second occurrence of "identical" refers to numerical identity. When we say that the objects that caused the two sense-experiences are identical, we mean that there is one object, rather than two, that is responsible for both of them. This equivocation, Hume argues, renders the argument fallacious; accordingly, we need another argument to support our belief that objects persist even when we are not observing them.
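Programming languages draw exactly Hume's distinction. In Python, for instance, the == operator tests whether two objects are alike in their properties (qualitative identity), while the is operator tests whether they are one and the same object (numerical identity). The sketch below is only an analogy to display the equivocation, not anything Hume himself offers.

```python
# An analogy for Hume's distinction (not his own apparatus):
# `==` plays the role of qualitative identity (alike in properties),
# `is` plays the role of numerical identity (one and the same thing).

car_seen_this_morning = {"make": "Honda", "color": "blue", "space": 12}
car_seen_just_now     = {"make": "Honda", "color": "blue", "space": 12}

print(car_seen_this_morning == car_seen_just_now)  # True: qualitatively identical
print(car_seen_this_morning is car_seen_just_now)  # False: two distinct objects

# Hume's point, in these terms: from the premise that two experiences
# are ==-identical, it does not follow that their causes are
# is-identical. Inferring the latter from the former equivocates.
```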

ii. Hume's Skepticism about Induction

Suppose that a satisfactory argument could be found in support of our beliefs in the persistence of physical objects. This would provide us with knowledge that the objects that we have observed have persisted even when we were not observing them. But in addition to believing that these objects have persisted up until now, we believe that they will persist in the future; we also believe that objects we have never observed similarly have persisted and will persist. In other words, we expect the future to be roughly like the past, and the parts of the universe that we have not observed to be roughly like the parts that we have observed. For example, I believe that my car will persist into the future. What is the basis for this belief? If asked to make my reasoning explicit, I might proceed as follows:

  1. My car has always persisted in the past.
  2. Nature is roughly uniform across time and space (and thus the future will be roughly like the past).
  3. Therefore, my car will persist in the future.

Similar reasoning would undergird all of our beliefs about the future and about the unobserved. Are such beliefs justified? Again, Hume thinks not, since the above argument, and all arguments like it, contain an unsupported premise, namely the second premise, which might be called the Principle of the Uniformity of Nature (PUN). Why should we believe this principle to be true? Hume insists that we provide some reason in support of this belief. Because the above argument is an inductive rather than a deductive argument, the problem of showing that it is a good argument is typically referred to as the "problem of induction." We might think that there is a simple and straightforward solution to the problem of induction, and that we can indeed provide support for our belief that PUN is true. Such an argument would proceed as follows:

  1. PUN has always been true in the past.
  2. Nature is roughly uniform across time and space (and thus the future will be roughly like the past).
  3. Therefore, PUN will be true in the future.

This argument, however, is circular; its second premise is PUN itself! Accordingly, we need another argument to support our belief that PUN is true, and thus to justify our inductive arguments about the future and the unobserved.
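The circularity can even be displayed mechanically. In the toy justification-checker below (a sketch of the argument's structure only; the representation is my own), the premises offered in support of PUN include PUN itself, so a naive check loops back to the very claim it set out to justify.

```python
# A toy model of the circular support: each claim maps to the premises
# offered on its behalf. The inductive defense of PUN uses PUN itself.
supports = {
    "PUN": ["PUN has always been true in the past", "PUN"],
    "PUN has always been true in the past": [],  # granted, for the sake of argument
}

def justified(claim, pending=frozenset()):
    """A claim is justified only if all of its premises are. A claim met
    again while still being checked marks a circle in the support."""
    if claim in pending:
        print(f"circular support detected at: {claim!r}")
        return False
    return all(justified(p, pending | {claim})
               for p in supports.get(claim, []))

print(justified("PUN"))
# circular support detected at: 'PUN'
# False
```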

5. Conclusion

The study of knowledge is one of the most fundamental aspects of philosophical inquiry. Any claim to knowledge must be evaluated to determine whether or not it indeed constitutes knowledge. Such an evaluation essentially requires an understanding of what knowledge is and how much knowledge is possible. While this article provides an overview of the important issues, it leaves the most basic questions unanswered; epistemology will continue to be an area of philosophical discussion as long as these questions remain.

6. References and Further Reading

  • Alston, William P., 1989. Epistemic Justification: Essays in the Theory of Knowledge. Ithaca, NY: Cornell University Press.
  • Armstrong, David, 1973. Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.
    • A defense of reliabilism.
  • BonJour, Laurence, 1985. The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press.
    • A defense of coherentism.
  • Chisholm, Roderick, 1966. Theory of Knowledge. Englewood Cliffs, NJ: Prentice-Hall.
  • Chisholm, Roderick, 1977. Theory of Knowledge, 2nd edition. Englewood Cliffs, NJ: Prentice-Hall.
  • Chisholm, Roderick, 1989. Theory of Knowledge, 3rd edition. Englewood Cliffs, NJ: Prentice-Hall.
    • Chisholm was one of the first authors to provide a systematic analysis of knowledge. His account of justification is foundationalist.
  • Descartes, René, 1641. Meditations on First Philosophy. Reprinted in The Philosophical Writings of Descartes (3 volumes). Cottingham, Stoothoff and Murdoch, trans. Cambridge: Cambridge University Press.
    • Descartes presents an infallibilist version of foundationalism, and attempts to refute skepticism.
  • Dancy, Jonathan and Ernest Sosa (eds.), 1993. A Companion to Epistemology. Oxford: Blackwell.
  • DeRose, Keith, 1995. "Solving the Skeptical Problem." Philosophical Review, 104, pp. 1-52.
  • DeRose, Keith and Ted Warfield (eds.), 1999. Skepticism: A Contemporary Reader. Oxford: Oxford University Press.
  • Feldman, Richard and Earl Conee, 1985. "Evidentialism." Philosophical Studies, 48, pp. 15-34.
    • The authors present and defend an (internalist) account of justification according to which a belief is justified or unjustified in virtue of the believer's evidence.
  • Gettier, Edmund, 1963. "Is Justified True Belief Knowledge?" Analysis, 23, pp. 121-123.
    • In which the Gettier problem is introduced.
  • Goldman, Alvin, 1967. "A Causal Theory of Knowing." Journal of Philosophy, 64, pp. 357-372.
  • Goldman, Alvin, 1986. Epistemology and Cognition. Cambridge, MA: Harvard University Press.
    • Perhaps the most important defense of reliabilism.
  • Haack, Susan, 1991. "A Foundherentist Theory of Empirical Justification," In Theory of Knowledge: Classical and Contemporary Sources (3rd ed.), Pojman, Louis (ed.), Belmont, CA: Wadsworth.
    • An attempt to combine coherentism and foundationalism into an internalist account of justification which is superior to either of the two.
  • Hume, David, 1739. A Treatise of Human Nature. Oxford: Oxford University Press.
  • Hume, David, 1748. An Enquiry Concerning Human Understanding. Indianapolis: Hackett.
  • Lehrer, Keith, 2000. Theory of Knowledge (2nd ed.). Boulder, CO: Westview.
    • A defense of coherentism. This is also where we find the Truetemp example.
  • Lehrer, Keith and Stewart Cohen, 1983. "Justification, Truth, and Coherence." Synthese, 55, pp. 191-207.
  • Lewis, David, 1996. "Elusive Knowledge." Australasian Journal of Philosophy, 74, pp. 549-567.
  • Locke, John, 1689. An Essay Concerning Human Understanding. Oxford: Clarendon.
  • Plato, Meno and Theaetetus. In Complete Works. J. Cooper, ed. Indianapolis: Hackett.
    • Plato introduces and examines a version of the JTB analysis of knowledge.
  • Pollock, John and Joseph Cruz, 1999. Contemporary Theories of Knowledge (2nd ed.). Lanham, MD: Rowman and Littlefield.
    • A defense of non-doxastic foundationalism, in which the basic states are percepts rather than beliefs.
  • Russell, Bertrand, 1912. The Problems of Philosophy.
    • Russell presents a Gettier-type example, which was largely overlooked for many years.

Author Information

David A. Truncellito
Email: truncell@aya.yale.edu
U. S. A.
