General Thoughts on Epistemology IV: The Difficulty of “Faith”

A man praying at a Japanese Shintō shrine.

First, I don’t want to step on anybody’s beliefs, but, well, here we go. – Brian Regan

Given an understanding of the world based on the scientific method—what I call the “wager” process—some tricky things happen when we try to reconcile it with the concept of “faith.” In what follows, I will explain why the term is unintelligible unless it is used to describe a situation in which one believes something while ignoring evidence to the contrary.

To reiterate what I have explained in previous entries, the “wager” theory claims that coming to a belief always involves a kind of wager. When a wager is made in gambling, one adopts the risk that the wager will fail. The wager is then confirmed or denied when the cards (or what have you) are revealed. In the same way, holding a belief involves adopting the risk that it will be disproven, and it is confirmed or denied by physical or logical tests.

The Oxford Dictionary defines “faith” as “complete trust or confidence in someone or something.” The Stanford Encyclopedia of Philosophy breaks it into three broad categories: affective, cognitive, and volitional. The affective component refers to the psychological state or attitude often denoted in the phrase “losing one’s faith.” The cognitive component gets at the epistemic nature of the concept, suggesting that it is an actual cognitive faculty by which we can come to “know” things. It is perhaps most accurately categorized as a kind of belief. The volitional component adds the notion that one can choose whether or not to have “faith”—to accept this special kind of belief. Most of what I will address here concerns the linguistic aspects of the cognitive component.

There is a basic definition given in the New Testament that has a number of translations. The English Standard Version is as follows: “Now faith is the assurance of things hoped for, the conviction of things not seen.” (Hebrews 11:1) Across translations, two elements recur as apparently essential components of “faith”: “hope” and a sense of certainty that goes beyond mere belief. First, combining “hope” with “certainty” does not appear to give us any semantic tool that we don’t already have, or to refer to some mental state that we cannot pinpoint otherwise. Moreover, the two concepts appear to be at odds. Saying that I hope x will occur usually requires that I am uncertain that x will occur; it is often a conscious hedging away from certainty. As aspects of “faith,” they also run into issues on their own.

“Hope” is primarily a stronger form of “desire”. When we hope for something to be the case, we desire for it to happen. If “faith” is another type of desire—a stronger version of “hope” perhaps—then it wouldn’t serve any epistemic purpose whatsoever. My desire for something to happen or be true has nothing at all to do with attempting to discover the actual outcome.

On the other hand, “certainty” does imply an epistemic relation. Usage seems to vacillate between absolute conviction and mere strong conviction. “Faith” does appear to have an absolute quality about it, since having only a strong conviction doesn’t quite cut it as a substitute for having faith in most circles. If “faith” is more of an absolute conviction, however, then it raises a series of questions: By what standard is it absolute? Is it ever logically permissible to have such a conviction? And if and when it is permissible, is “faith” a reasonable vehicle to get there?

I haven’t yet seen an acceptable set of answers suggesting that “faith” is a helpful and coherent tool in epistemology. Even if one can muster an adequate description and justification of “absolute” conviction, we are still left with showing how “faith” works and making the case that it is something reasonable to use.

The problem is that there really is no good explanation for how it works. “Faith” is considered quite mysterious, intentionally so. And because reasonable use is contingent upon there being at least some tenable understanding of its function, the chances that faith and reason coincide are slim at best.

There are some attempts to do this, however. One would be to identify faith as a component of every belief, one that somehow accounts for the gap between our natural state of ignorance and the actions we take. According to this model, every action we take requires an extra push to get us there from our beliefs. But “faith” doesn’t appear to provide any explanatory power here. And once we add the parts we need to describe the basic epistemic process—placing proposition-based wagers and using actions and “intuition pumps” as tests to confirm or deny them—it becomes unnecessary to add any more components.

We could attempt to identify the concept as exactly the same thing as making a wager, but for “faith” proponents this would probably be undesirable, for the element of certainty would be lost. A bet (much like a scientific theory) holds strongly to the assumption that its failure is within the realm of possibility. Unless I am mistaken, it is clearly the opposite for matters of faith.

Another attempt would be to note that “faith” is most applicable when deciding whether or not to trust another human being, the thought being that we are so complex and unpredictable that the only viable option is to take some “leap.” If so, then faith and rationality are quite separate, since the assumption is that reason cannot predict what will happen.

However, the “wager” theory still acts as an intelligible substitute here even if we cannot reasonably “predict” what will happen. One can effectively bet on what someone will do and hope that they do it. Nothing more is needed. To use “faith” as a replacement for these two operations still flies in the face of the certainty requirement. To tack it on as an extra piece of the puzzle seems either to be unintelligible or to create a contradiction (being certain and uncertain simultaneously).

There is still one definition of “faith” that fits neatly into this narrative, and that is “to believe a proposition despite evident contradictions.” In this case, “faith” would not be part of the proper epistemic process. It would describe a decidedly irrational function by which one may believe certain propositions to be true while ignoring conflicting test results for reasons external to the epistemic process. Although employing “faith” could not be rational on this account, it may still have some positive psychological effects for various people. And it may even bring more happiness to those lives than “proper epistemology” would. Even so, I would not subscribe to it. But that’s just me.

General Thoughts on Epistemology III: We Are All Cosmic Gamblers

The French Gambling Aristocracy (Photo credit: Wikipedia)

As a preface to this post, it is crucial for me to communicate that I am setting aside the discussion about “knowledge” as my primary pursuit. The reason, as I explained in part II, is that the use of the word has become muddled. My position is that philosophical skepticism about “knowledge” of the external (physical) world attempts to solve a presently unsolvable problem, and therefore, “knowledge” may not be the best term to use when attempting to describe how we gather and store data in our brains. Instead, I will focus on the interaction between human beings and the external world on its own terms. Hopefully what follows is indicative of this.

A while ago I wrote about Pascal’s Wager, contending that Pascal removes the essential components of what it takes to justify a belief. His argument was that, in a cost-benefit analysis, belief in God is the more beneficial choice. I reject his presumption that a person can rationally choose to believe something exists while disregarding all evidence for or against its existence. However, his idea of equating believing and wagering is worthy of consideration. My suggestion is that the process of making a wager is the best model for what is going on when we decide that something is the case. When we claim to have a belief, faith, trust, or even knowledge, what we ultimately have is a form of bet. This does not answer the question of how we ought to come to a belief, but my inclination is to say that this theory helps explain what these kinds of claims fundamentally are.
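
For concreteness, here is the standard decision-theoretic rendering of Pascal’s argument (the notation is mine, not his). Let $p$ be any nonzero credence that God exists, and let all worldly costs and benefits be finite:

$$
\mathbb{E}[\text{believe}] = p \cdot \infty + (1 - p) \cdot c_{\text{finite}} = \infty,
\qquad
\mathbb{E}[\text{disbelieve}] \le \text{some finite value.}
$$

As long as $p > 0$, believing dominates. My objection is not to this arithmetic but to the presumption that belief can be purchased like a lottery ticket, independent of the evidence.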

There are a few “armchairish” observations sometimes taken for granted that hint at my suggestion being a good one. Assuming that humans all function the same in the following ways:

1) we use observations and cognitive processes to form beliefs;
2) we take action based on our beliefs;
3) sometimes mistakes, ambiguity, and/or external factors outside of our control cause us to be incorrect in/about our beliefs; and
4) replacing the word “belief” with “wager” in (1)-(3) results in a fairly coherent progression of thought.

Put simply, when we decide that P is true given the data we have, we are simultaneously placing a wager on P. In other words, we do not only think it is true now; we are betting it will continue to be true as time goes on. Then, as we go about life taking action according to the wager, the proposition is tested by observations x, y, z, etc. If these observations appear to connect logically and/or causally with P (if everything appears consistent), then we confirm it to some degree, and the wager is not changed. The risk adopted by the bet is that the proposition may be proven incorrect, and there may be some undesirable consequence of the action(s) based upon it.
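
Here is a minimal sketch of that loop in code (the class, the method names, and the crude confidence arithmetic are my own illustration, not a claim about how minds actually compute):

```python
from dataclasses import dataclass

@dataclass
class Wager:
    """A bet that proposition P is true and will keep being confirmed."""
    proposition: str
    confidence: float = 0.5  # strength of conviction; the bet itself is just holding P true

    def test(self, observation_consistent: bool) -> None:
        """Each observation x, y, z either coheres with P or conflicts with it."""
        if observation_consistent:
            # consistent observations confirm the wager to some degree
            self.confidence = min(1.0, self.confidence + 0.1)
        else:
            # the adopted risk: P may be proven incorrect
            self.confidence = max(0.0, self.confidence - 0.3)

# Placing the bet and letting life test it:
p = Wager("the bridge will hold my weight")
for consistent in (True, True, True):
    p.test(consistent)
print(p.proposition, p.confidence)  # roughly 0.8: confirmed to some degree, never absolutely
```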

This process is related to Jonathan Haidt’s “social intuitionist” theory of the mind. He concluded that people have an initial snap-judgment reaction to a stimulus, followed by a series of rational thoughts that, he says, “support” the initial judgment. I agree with this general theory, but if we also allow that rational thought can deny the initial judgment, the theory has an even wider application.

The nature of the game is that whenever we consider some question, we have an open field of possible truths that is narrowed as we rule things out through testing, observation, and logic. This thought isn’t new. It is reminiscent of the scientific method, which began to take shape during the Renaissance and the early colonial era. Most modern iterations of the scientific method assume that any theory is open and liable to change, not only because language becomes more efficient and useful, but also because the pool of data changes over time. A “wager,” I suppose, is most analogous to a “hypothesis.” But hypotheses are more consciously contrived, and my goal for the “wager” is to be broad enough to cover unconscious behavior, in addition to all manner of predictions, theories, and conjectures.
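
That narrowing can be pictured as simple elimination over a field of candidates (a toy example of my own devising, not a claim about cognition):

```python
# An open field of possible truths about why the car won't start.
candidates = {"dead battery", "empty tank", "faulty starter"}

def rule_out(field: set[str], eliminated: str) -> set[str]:
    """Tests never prove the survivor outright; they only remove losers."""
    return field - {eliminated}

candidates = rule_out(candidates, "dead battery")  # headlights work fine
candidates = rule_out(candidates, "empty tank")    # gauge reads full
print(candidates)  # the wager now rides on {'faulty starter'}
```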

Note that merely “placing a bet” says nothing of my conviction in the outcome, the quality or quantity of information that is taken into account, the level of consciousness with which I make the bet, or the time-frame of my test. I may be unconsciously believing something ridiculous, or I could be making a detailed evaluation of a claim’s plausibility. Both would involve a wager of some form—a decision to hold some proposition to be true or at least to act as if it were true.

One might also point out that there are important distinctions between different kinds of wagers, namely temporal ones. There are wagers about what will happen (predictions), about what has already occurred (beliefs), and about that which is ongoing. Suppose my childhood friend and I see a squirrel darting through the street, and he says, “I bet that squirrel will get hit by a car!” Our inclination would be to label this as a prediction, since the event has not yet come to pass. Continuing the story, suppose I respond, “I do not believe that squirrel will be hit by a car.” It seems odd at first that I would use the word “belief” for what should be another prediction.

The reason someone might make such a mistake, I think, hints at a deeper underlying theory like the one I have proposed. If both beliefs and predictions are forms of a wager, then there is an inherent predictive aspect in both of these terms. The substitution happens because “belief” has an inherent predictive quality, which makes it easy to mistake for a prediction. However, it still does not have the same set of qualities that “prediction” has. There is still room for distinction.

And the distinction is this: A prediction is a consideration of what has not yet come to pass; a belief considers what is ongoing or has passed. In my theory, they both still fall into the “wager” category. I can still comfortably replace both “I believe” and “I predict” with “I bet.”

The immediate worry that should arise is that I seem committed to saying that predictions, wagers, and beliefs are all the same thing. In response, my claim is that “wager” is broad enough to encompass all of these terms, and that there is a predictive element to every wager. That element is a “first-order” qualification: beliefs, predictions, propositions, and the like share in common an expectation of continued confirmation, and confirmation can only occur at points in time after a wager is made, regardless of its kind. Note that because people are continuously acting or not acting, each action must carry with it an implicit set of bets. Since this is always the case, it doesn’t seem that “beliefs” can be abstracted from “wagers.”

Secondary qualifications make up the distinctions between the different kinds of wagers. For instance, expanding upon the distinction I made earlier: the content of a belief concerns an ongoing or past phenomenon; the content of a prediction concerns a future one.
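
One way to picture the two layers (again with illustrative names of my own): the first-order layer is the shared expectation of later confirmation; the secondary layer is the temporal content that separates the kinds.

```python
class Wager:
    """First-order qualification shared by every kind of wager."""
    def __init__(self, proposition: str):
        self.proposition = proposition
        self.confirmed = None  # confirmation can only arrive after the wager is placed

class Belief(Wager):
    """Secondary qualification: content concerns an ongoing or past phenomenon."""
    tense = "ongoing or past"

class Prediction(Wager):
    """Secondary qualification: content concerns a future phenomenon."""
    tense = "future"

# Both are wagers; only the temporal content of the proposition differs.
b = Belief("the squirrel was not hit")
f = Prediction("the squirrel will be hit")
```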

To conclude, when we form beliefs and predictions, we are making bets on what we think is accurate. The discussion about whether we can verify knowledge globally is a bunch of bunk. It is pretty clear that sometimes we make mistakes, but our goal is to seek truth regardless. And ultimately, we all play the game. We are all cosmic gamblers.

General Thoughts on Epistemology II: Global Philosophical Skepticism

nihilism (Photo credit: stungeye)

The various forms of philosophical (p.) skepticism in the study of epistemology seek to question, deny, or limit the categories of what is possible to know. While it is useful to point out that we don’t have the data to confirm or deny some claims, many skeptics go so far that they would be forced to live a lifestyle that is inconsistent with their beliefs. I argue that questioning the “existence” of knowledge as a whole probably serves no purpose, since either answer will have no bearing on the everyday criteria for making decisions and taking action.

General p. skepticism questions the prospect of “knowledge.” There are several schools of thought on the family tree of p. skepticism. The extreme form is called epistemological nihilism, in which all “knowledge” is denied. There is also epistemological solipsism, a theory stating that “knowledge” about the external world is impossible, but “knowledge” about the agent’s own mind is possible. ‘Global’ p. skeptics claim that they hold no “absolute knowledge,” while ‘local’ p. skeptics question specific types of knowledge. My target is the multifarious forms of global (g.) p. skepticism, including nihilism and solipsism, and there are three main issues with these positions that I will touch upon.

First, this debate has been prolonged by a failure to give proper attention to semantics. The quotations around the word “knowledge” in my explanation are present because I have difficulty pinpointing exactly to what g.p. skeptics are referring when they use the term. It wouldn’t be just to completely fault the deceased thinkers who would have benefitted from access to modern advances in neuroscience. Nevertheless, “knowledge” remains ambiguous even without modern scientific perspectives.

It could be that “knowledge” really is “Justified True Belief.” Or it could be some abstract thing we achieve when we fully “understand” such and such. Perhaps it could be as simple as a subconscious observation that we sense in our environment. Ever walk into a room with, say, blue wallpaper, but hardly pay it any mind? How conscious does one have to be in order to know that the wallpaper is blue?

A g.p. skeptic might respond, “Exactly. The point is that we have no clear idea of what knowledge is.”

But they forget that human beings are the authors of language. We get to decide exactly to what “knowledge” refers. Any label without a referent is a floating abstraction. The question should not be, “What is knowledge?”, but rather, “How should ‘knowledge’ be defined?” The question should not be “Does knowledge exist?”, but rather, “Is ‘knowledge’ a useful term, given its definition?”

The second issue is that some skeptics must act inconsistently with their beliefs in order to interact with their environment. Nihilists believe that there can be no “knowledge” about the external world, and therefore that it cannot be verified. Yet, if they want to do anything, they must act upon information they receive through their senses. If they believe nothing they receive should be characterized as “knowledge,” then fine; the discussion becomes semantic. Otherwise, unless the nihilist sits quietly until death (or is deaf, dumb, blind, etc.), the beliefs he/she holds will be violated.

As justification, I propose this thought experiment: Try to think of just one instance in which a normal, conscious human being can act physically without being aware of something in the physical realm.

Finally, there is an almost comical response to g.p. skeptics from a philosopher named G.E. Moore. He essentially says (not an actual quote), “Look. Here is my hand. I am perceiving a human hand right now. That is a fact that I ‘know.’ If you do not think I know it, then let’s say that I am acquiring sense-data. One sense-datum that I perceive is the surface of my hand.”

Although he employs an intriguing way of engaging g.p. skepticism, if we take the analysis one step further, we arrive at an important point: perception of the so-called “external world” is the default state of human beings. There are all these lights and sounds and feels going on that we report on and verify through language. “Stuff” happens with such frequency that there is not really any reason to deny knowledge of the external world, especially on the basis that we could be wrong about individual bits of knowledge. (Proving something false requires establishing the truth of its negation, so even error presupposes some knowledge.) The burden of proof is on the denier of the clear and obvious evidence that literally surrounds us at all times.

To reiterate, challenging the usefulness of the term “knowledge” is a fair challenge, but questioning the concept while presupposing an ambiguous meaning is problematic. Going too far with g.p. skepticism results in an inconsistent lifestyle: regularly acting in accordance with one’s sense-data while globally denying that very same data is an incoherent position. Therefore, the only option by default is to accept the external world as a given. Even settling for an agnostic position, abstaining from belief in the external world, may conflict somewhat with taking action. As I like to say, half-seriously, in all its Objectivist glory, “Reality doesn’t care about your nuanced opinion.”

Realists and Idealists

Realists VS Idealists (Photo credit: Emilie Ogez)

At some point in time you have probably been taught that there is a difference between a realist and an idealist. There are two chief understandings of this perceived dichotomy. The common usage is a description of human behavior, often seen as an explanation for political decisions. It is dependent upon the second, philosophical usage, however, which largely denotes an epistemic stance. My argument is that the philosophical version is a false dichotomy, and as a result, the common version is not a very useful mental construct.

Usually idealists are understood to take action based upon what they want to see as an ideal theoretical end. Sometimes we call people “idealists” when we observe that they think big without adequately taking into account the steps needed to achieve their goals. They are dreamers and eternal optimists.

Realists, on the other hand, are thought to be more pragmatic in their approach. They tend to be more pessimistic about the world and what can be accomplished, but they are coincidentally (if not causally) more often correct and may even live longer. Realists supposedly see the world as it is, and they act more pragmatically without looking outside of their personal sphere to accomplish lofty, theoretical goals.

In a philosophical context, the respective meanings are different but related. In one sense, realism and idealism can be understood as metaphysical interpretations that may apply to any field of philosophy. Within every interpretation is a claim about the existence of something and to what degree it exists independent of our knowledge. Therefore the discussion is rooted in the most primary forms of philosophy: metaphysics and epistemology.

Generic Realism goes something like this: “a, b, and c and so on exist, and the fact that they exist and have properties such as F-ness, G-ness, and H-ness is (apart from mundane empirical dependencies of the sort sometimes encountered in everyday life) independent of anyone’s beliefs, linguistic practices, conceptual schemes, and so on.” So a theory of epistemological realism might make the claim that all things we know are generically real. This theory would be a subcategory of objectivism.

The theories in opposition to epistemological realism, labeled non-realist, are numerous. But the most widely referenced is—you guessed it—epistemological idealism. Plato was one of the first epistemic idealists, with his allegory of the cave and his famous theory of the forms. His key belief was that knowledge consists of “memories” that your “soul” recalls from its time before birth hanging out with the forms (which are supposedly perfect versions of all the “imperfect” knowledge we gather in the human world).

A more representative picture of current philosophical idealism can be seen in the German idealists, like Kant and Hegel, who are among the most influential. Kant posits that all we are capable of observing is the sense data we obtain through experience; therefore, knowledge relies on a framework of universal, a priori structures in the human mind (space and time among them) in order to make sense of that experience. He divides reality into two realms: the phenomenal (experiential) and the noumenal (transcendental).

Hegel accepts Kant’s belief that knowledge begins with our experiences, but he rejects the idea that we can know anything transcendental; he argues that we can only be skeptical of such things. He does agree, though, that our experiences are mediated through the mind.

Part of the reason I say what follows is that I know there will be no recourse from dead men: most of these epistemological debates are just an intellectual pissing match. The differences about the nature of knowledge are essentially unessential; for the most part, only the things they agree upon are important. Realists and the various idealists all agree that we have experiences by way of the senses, that we analyze them with our brains, and that by this general process we form “knowledge” (whatever its nature may be). Most of the disagreement results from a failure to clearly define knowledge and its characteristics. I suppose this makes me a semi-quietist.

Ultimately, generic epistemic realism and most forms of idealism are not actually in conflict. It may be that Kant’s framework of understanding is valid—that all we observe is sense data and that it is meaningful to (at least) distinguish between physical and nonphysical things. Perhaps Hegel is right that we should be skeptical about nonphysical things. In the end, the dispute serves no purpose.

What idealists have mostly done is bicker about the degree to which realism can(not) be proven. But they fail to deny (or sometimes even to observe) that realism must be assumed in the actions of everyday life. Imagine living a life full of the worry that things will spontaneously phase out of existence if you pay them no attention. Along a similar line of thought, we make use of “transcendental” or metaphysical concepts all the time. We can disregard their idealistic origin should we so choose, but we must recognize their utility, for example, when we employ mathematics, geometry, and calculus to solve real-world problems.

The problem with this philosophical dichotomy is similar to its colloquial cousin. At most, “realist” and “idealist” could be used as labels for people who actually fit their narrow descriptions. Almost all people, however, operate according to the simple, functional framework that I just explained and thus would not be categorized as such. Even those who use the labels regularly typically concede that the dichotomy should be understood as a scale, on which an individual may favor one disposition over the other.

This practice, even with that concession, is still dangerous because it pigeon-holes people into mental structures that limit their capabilities. If a person thinks he or she is predisposed to acting on ideals, then it will likely become a self-fulfilling prophecy, and that person may refuse to take certain realistic issues into account even when it would not be difficult to do so. The same holds for people who think that they are “realists.”

The important thing for people to recognize is that there is no real utility to the mutual exclusivity between colloquial realism and idealism. They should strive to make use of both in concert, as our brains, on a more accurate conceptual understanding, already do.