General Thoughts on Epistemology IV: The Difficulty of “Faith”

A man praying at a Japanese Shintō shrine.

First, I don’t want to step on anybody’s beliefs, but, well, here we go. -Brian Regan

Given an understanding of the world based on the scientific method—what I call the “wager” process—there are some tricky things that happen when we try to reconcile the concept of “faith.” In what follows, I will explain why the term is unintelligible unless it is used to describe a situation in which one believes something while ignoring evidence to the contrary.

To reiterate what I have explained in previous entries, the “wager” theory claims that coming to a belief always involves a kind of wager. When a gambler places a wager, he adopts the risk that the wager will fail; the wager is then confirmed or denied when the cards (or what have you) are revealed. In the same way, holding a belief carries the risk that it will be disproven, and the belief is confirmed or denied by physical or logical tests.

The Oxford Dictionary defines “faith” as “complete trust or confidence in someone or something.” The Stanford Encyclopedia of Philosophy breaks it into three broad components: affective, cognitive, and volitional. The affective component refers to the psychological state or attitude often denoted in the phrase “losing one’s faith.” The cognitive component gets at the epistemic nature of the concept, suggesting that it is an actual cognitive faculty by which we can come to “know” things; it is perhaps most accurately categorized as a kind of belief. The volitional component adds the notion that one can choose whether or not to have “faith”—to accept this special kind of belief. Most of what I will address here are the linguistic aspects of the cognitive component.

There is a basic definition given in the Epistle to the Hebrews that has a number of translations. The English Standard Version reads as follows: “Now faith is the assurance of things hoped for, the conviction of things not seen.” (Hebrews 11:1) Across the translations, two recurring elements appear to be essential components of “faith”: “hope” and a sense of certainty that goes beyond mere belief. First, combining “hope” with “certainty” does not appear to give us any semantic tool that we don’t already have or to pick out some mental state that we cannot pinpoint otherwise. Moreover, the two concepts appear to be at odds: saying that I hope x will occur usually requires that I am uncertain whether x will occur. Hoping is often a conscious hedging away from certainty. As aspects of “faith,” the two concepts also run into issues on their own.

“Hope” is primarily a stronger form of “desire”. When we hope for something to be the case, we desire for it to happen. If “faith” is another type of desire—a stronger version of “hope” perhaps—then it wouldn’t serve any epistemic purpose whatsoever. My desire for something to happen or be true has nothing at all to do with attempting to discover the actual outcome.

On the other hand, “certainty” does imply an epistemic relation. Usage seems to vacillate between absolute conviction and merely strong conviction. “Faith” does appear to have an absolute quality about it, since having only a strong conviction doesn’t quite cut it as a substitute for having faith in most circles. If “faith” is more of an absolute conviction, however, then it raises a series of questions: By what standard is it absolute? Is it ever logically permissible to have such a conviction? And if and when it is permissible, is “faith” a reasonable vehicle to get there?

I haven’t yet seen an acceptable set of answers that suggests “faith” is a helpful and coherent tool in epistemology. Even if one can muster up an adequate description and justification of “absolute” conviction, we are still left with showing how “faith” works and making the case that it is something reasonable to use.

The problem is that there really is no good explanation for how it works. “Faith” is considered quite mysterious, intentionally so. And because reasonable use is contingent upon there being at least some tenable understanding of its function, the chances that faith and reason coincide are slim at best.

There are some attempts to do this, however. One would be to identify faith as a component of every belief, one that somehow accounts for the gap between our natural state of ignorance and the actions we take. According to this model, every action we take requires an extra push to get us there from our beliefs. But “faith” doesn’t appear to provide any explanatory power here. And once we add the parts we need to describe the basic epistemic process—placing proposition-based wagers and using actions and “intuition pumps” as tests to confirm or deny them—it becomes unnecessary to add any more components.

We could attempt to identify the concept as exactly the same thing as making a wager, but for “faith” proponents this would probably be undesirable, for the element of certainty would be lost. A bet (much like a scientific theory) is something that holds strongly to the assumption that its failure is within the realm of possibility. Unless I am mistaken, it is clearly the opposite for matters of faith.

Another attempt would be to note that “faith” is more applicable when deciding whether or not to trust another human being, the thought being that we are so complex and unpredictable that the only viable option is to take some “leap.” If so, then it would imply that faith and rationality are quite separate, since the assumption is that we cannot predict with reason what will happen.

However, the “wager” theory still acts as an intelligible substitute here even if we cannot reasonably “predict” what will happen. One can effectively bet on what someone will do and hope that they do it. Nothing more is needed. To use “faith” as a replacement for these two operations still seems to fly in the face of the certainty requirement. To tack it on as an extra piece of the puzzle seems either unintelligible or contradictory (being certain and uncertain simultaneously).

There is still one definition of “faith” that fits neatly into this narrative, and that is “to believe a proposition despite evident contradictions.” In this case, “faith” would not be part of the proper epistemic process. It would describe a decidedly irrational function by which one may believe certain propositions to be true while ignoring conflicting test results for reasons external to the epistemic process. Although employing “faith” could not be rational on this account, it may still have some positive psychological effects for various people. It may even bring more happiness to those lives than “proper epistemology” would. Even so, I would not subscribe to it. But that’s just me.

Emotional Control Through Rational Thought (Learning How to be a Robot)

Contrary to what may be inferred from the title of this post, I do not think an individual can immediately decide to feel a certain way about such and such in opposition to an initial feeling. Even if that is possible, I do not think it is easily accomplished. I do think, however, that individuals can condition themselves over time to react emotionally in one way or another.

One way to think about this is to examine Aristotle’s view, in which he divides the soul into three categories. The first amounts to what plants are capable of: basic nourishment and reproduction. The second level is that of animals, who have the powers of locomotion and perception. The third is the human level, which introduces the intellect (reason, rationality, or what have you). He uses this framework to explain how one should live a eudaimonic (or the best kind of) life. His belief is that it is virtuous to utilize one’s rational capabilities to the fullest while exhibiting self-control when dealing with the lower-level functions of life like appetite and emotion.

This may not be the most detailed or accurate way to categorize life considering our modern-day understanding, but there are a few reasoned observations that suggest that Aristotle is on to something: 1) The human capacity for rational thought, or something similar, is probably the essential characteristic that makes humans different from other life-forms on this planet. 2) The use of logic through our rational thought allows us to come to accurate conclusions about the world around us. 3) Rational conclusions can be overturned by emotional desires and vice versa. 4) Humans have the capability to change how rational thought and emotion are involved in their thought processes.

IF all this is pretty much true; and IF humankind is the most advanced form of life in existence; and IF there is an Aristotelian “eudaimonic” life to be had, then MAYBE we should all aspire to become robots. These are some big “ifs,” of course, hence the all caps… but no, I don’t actually advocate that we all aspire to become robots (right now, anyway). Why? Human functioning is far more complex than anything a robot can do; complex enough, in fact, that we cannot yet replicate it by artificial means. I would, however, advocate that people spend more time on “logic-based thought” than on “emotional thought.” Why? Because I think it does more good for the world.

Whatever degree of utility emotion has in human thought processes, there is no denying that it takes relatively little time for most people to have emotional thoughts. Emotions are reactive by nature: they are automatic responses that our bodies have to certain stimuli. We typically have very little control over these reactions, as they are hard-wired into our brains. They are often explained as evolutionary survival mechanisms and are thought to arise primarily from the limbic system. They fall squarely into the category of non-rational functioning.

Rationality, on the other hand, is characterized by conscious and deliberate thought processes. To reason about something is considered an exercise in human agency: we are doing it on purpose, and we have control. Its function is essentially to discover truth by logically analyzing our observations. Processes in this category, like differentiation and determining causal relations, occur in the frontal lobe. I am of the impression that an individual can use rational processes like these to alter emotional processes.

Because emotions are closely tied to memory via the limbic system, I think the first step toward effective emotional control is to recognize the causal patterns of one’s behavior. It would be prudent to analyze the typical triggers that cause associated emotional memories to fire, with the goal of pinpointing the exact underlying causes that elicit the feeling. This can be difficult when those causes are suppressed, but that is what your frontal lobe is there for. Taking the time to consciously face some of these issues might also require courage, but I don’t know how to help people with courage. Just don’t be a weenie, I guess.

The second step would be to learn how to counteract the emotional reaction brought on by the trigger. There are many ways to do this, but I strongly advise against ignoring the emotion if your goal is long-term control. The objective of this step is to create emotional memories that override and replace the current ones. This can be done through introspection, external exposure, or a combination of the two. For example, suppose that I fear speaking in public. One thing I can do is to expose myself to situations in which there is more pressure to speak, like taking a speech class. Perhaps I can create a parallel scenario in which I am speaking in front of friends as if I were speaking in public. These are very common remedies to a common problem.

One uncommon method, though, is to use introspection. A solution can be found by creating a new perspective for oneself through thinking about the different possible outcomes. The practice could involve imagining worst-case scenarios — those which would be most feared — and reconstructing the feeling in one’s mind. Doing this regularly may “wear the feeling out,” allowing the individual to better accept the emotion and making its effect negligible. Another option is to contrast the trigger situations with other situations that are far worse, creating a logical connection that will eliminate the reaction. Eventually it is possible for the subject to adopt the perspective of the indifferent observer: “So what?”

There isn’t really a third step.

If there were though, it would probably be to practice doing this until you become a really well-adjusted person.

…Or if your dream is to become a robot, then have at it.

Optimus Prime (Photo credit: Devin.M.Hunt)

Moral Sentimentalism and Moral Rationalism

Moral Sentimentalism and Moral Rationalism are two epistemological theories of morality—theories of how we know what is right and wrong. Sentimentalists like Francis Hutcheson, David Hume, and Adam Smith have argued that knowledge of morality arises from our senses; this has been described as an emotional basis, similar to the way we understand beauty. Rationalists like Immanuel Kant and Samuel Clarke have argued that we gain knowledge of morality from rational thought; in this view, the way we understand morality would be similar to the way we understand mathematics. Although this is a massive subject, I will do my best to reduce it to the essentials in order to explain why this is a false dichotomy and how we can better understand what happens when we make moral judgements.

Math is beautiful (Photo credit: quinn.anya)

Prof. Michael B. Gill of the University of Arizona tells us that the two positions, taken as a whole, are incompatible. The standard rationalist view holds that moral truths are necessary truths: like “2+2=4,” they must be true in all possible worlds (alternate realities). If so, then judgements of morality are nothing like aesthetic judgements, because we can imagine possible worlds in which a thing is beautiful and other possible worlds in which it is not. Conversely, sentimentalists hold that believing something to be beautiful and having a favorable feeling towards it are identical (or at least necessarily connected). In the same way, holding a moral belief toward a given action would be identical to having some feeling regarding that action. If this is the case, then there can be no analogy between morality and mathematics, because math doesn’t address how we feel.

Gill rejects the idea that Sentimentalism and Rationalism are mutually exclusive. I agree, and I think the stalemate persists primarily because of a failure in the discussion to connect on an epistemological level. There are a number of premises about how we know things in general that must be examined before we talk about morality. A theory that says we can know nothing through sentiment would eliminate sentimentalism completely, while a theory that says we do not use rational thought in obtaining knowledge would eliminate moral rationalism. Instead of “knowledge,” which serves to confuse the discussion, I think it is more useful to speak of “judgement.” The matter is much more intricate than this, but for my purposes I will have to set aside most of the epistemological discourse.

I think that both camps attempt to address different aspects of morality, aspects that are explained more clearly in a psychological context. Jonathan Haidt talks about this in the first part of his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. His belief is that moral “intuitions” come first and “strategic reasoning” comes second: people usually have an unconscious reaction to a moral situation, and then they rationalize it.

Does this mean that emotion serves as the basis for moral judgement? Prof. Jesse Prinz of the University of North Carolina at Chapel Hill thinks so. In fact, he argues, as Hume did, that judging something to be wrong is the same thing as having a negative sentiment towards it. He goes on to argue that emotion is both necessary and sufficient for making a moral judgement. There are studies showing that areas of the brain associated with emotion light up when people make moral judgements. Other studies reveal that emotions external to the moral dilemma affect the way we make judgements. The data also suggest a correlation, in psychopathic subjects, between a lack of emotion and an inability to draw a distinction between morality and mere convention.

A joint study involving scholars from Harvard, Tufts, and UMBC remarks that the data are simply not enough to conclude that emotion is necessary and sufficient for making a moral judgement. More specifically, they do not provide a precise enough understanding of the role that emotion plays in judgement. Neuroimaging data show only a correlation between emotion and moral judgement, not causation. The effect of unrelated emotional inputs is not limited to moral judgement. And the research on psychopathic behavior actually shows that many psychopathic subjects still make the morality/conventionality distinction, only less often than normal subjects, which is certainly not enough to confirm Prinz’s conclusion.

If one goal of moral judgement is to determine what is true, shouldn’t there be a key role for reason? The scholars from the joint study point towards an unconscious process that includes “causal-intentional representations,” which I take to be a form of reasoning. After all, a moral judgement is not meant to be a subjective statement. It is a statement that judges how things ought to be, suggesting that there should be a correct and objective answer. So looking at reason as opposed to “emotion” might not be the best way to describe what is happening.

Haidt says that there is a divide between two main types of cognition regarding morality: intuition (in place of emotion) and reasoning. He believes that intuitions come first and reasoning second, and he draws from this that Hume was right that passions (intuitions) “trump” reason. Emotions, to him, are just another form of cognition (information processing) and should be categorized somewhat differently than they have been before. He also retains the idea that conscious reasoning is still an important aspect of moral judgement. Ultimately, I think we would both agree that the standard sentimentalist/rationalist dichotomy is faulty.

If Hume and Haidt are merely pointing out what happens—that people usually intuit first—then the claim is unsurprising and uncontroversial. If the claim is about what is most important, however, Haidt’s conclusion about Hume is a bit odd. The problem, among other things, is that Hume’s correctness wouldn’t follow. Just because people intuit first and reason consciously afterward does not mean that intuitions are more important; it only means that they happen first. On this interpretation, neither thinker appears to give any credence to the idea that perhaps what is most important is what is most effective. It also leads me to believe that Haidt would have to assume intuition and judgement are the same thing (what Hume argues) and disregard the notion that intuition could be connected to reason.

If this is not Haidt’s intention, we nevertheless can dig into the same discussion about how to address and think about morality. We may not immediately be thinking deeply in response to moral stimuli, but individuals can certainly change their habits in how they react over time by thinking consciously about them. I am convinced that if we can change what we think is moral, then we have some degree of choice that affects our “intuitions.” We are now faced, ironically, with a somewhat moral question about morality: “Should we try to rationalize morality as much as possible or just go with whatever we feel like?” Beginning with the next post, I will address questions like this in a series called The Morality of Moral Judgement.

Examining Objectivist Government

What is this “government” thing, Ayn?

“A government is an institution that holds the exclusive power to enforce certain rules of social conduct in a given geographical area.”

“A government is the means of placing the retaliatory use of physical force under objective control—i.e., under objectively defined laws.”

Makes sense. Whence is that retaliatory use of physical force derived?

“The necessary consequence of man’s right to life is his right to self-defense. In a civilized society, force may be used only in retaliation and only against those who initiate its use.”

Why wouldn’t an anarchist society work, say, a pacifist one?

“If some “pacifist” society renounced the retaliatory use of force, it would be left helplessly at the mercy of the first thug who decided to be immoral. Such a society would achieve the opposite of its intention: instead of abolishing evil, it would encourage and reward it.”

OK, but what if there were no government, and people could freely defend themselves?

“If a society provided no organized protection against force, it would compel every citizen to go about armed, to turn his home into a fortress, to shoot any strangers approaching his door—or to join a protective gang of citizens who would fight other gangs, formed for the same purpose, and thus bring about the degeneration of that society into the chaos of gang-rule, i.e., rule by brute force, into perpetual tribal warfare of prehistoric savages.”

“[Therefore,] the use of physical force—even its retaliatory use—cannot be left at the discretion of individual citizens.”

Wait… What if everyone were fully rational and faultlessly moral?

“…the possibility of human immorality is not the only objection to anarchy: even a society whose every member were fully rational and faultlessly moral, could not function in a state of anarchy: it is the need of objective laws and of an arbiter for honest disagreements among men that necessitates the establishment of a government.”

Why would any given human population be generally rational/capable enough to enforce the Law as members of a government, but not generally rational/capable enough to enforce the Law as free individuals?

Thus far, I haven’t been able to find an adequate answer to this question. Don’t get me wrong: I really like the idea of a minarchist society. Heck, I like the idea of an anarcho-capitalist society. I do think that either would be difficult to achieve given the vast history of humankind, but I would gladly live in (and even strive for) either society provided that it could exist and run effectively.

Even so, in the next post I will deliver a critique of Ayn Rand’s perspective on government.

(All quotes from “The Nature of Government.”)

Pascal’s Wager and the Giant Meatball

Raw meatball, before frying (Photo credit: Wikipedia)

Blaise Pascal argued in the seventeenth century that we ought to choose to believe in God because an assessment of the consequences tells us that we have more to gain by doing so. If God does exist, we gain infinitely by believing and lose infinitely by not believing. If God does not exist, we (presumably) gain finitely by not believing and lose finitely by believing. Because the infinite consequence of our decision in the “God exists” world will always outweigh any finite consequence of our decision in the “God doesn’t exist” world, we stand to gain more by choosing to believe.
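To make the arithmetic concrete, here is a minimal sketch of Pascal’s decision matrix in code. The finite payoffs are illustrative placeholders of my own, not anything Pascal specifies; the structure is what matters, because any nonzero probability of God existing lets the infinite terms swamp every finite term.

```python
# A toy version of Pascal's decision matrix. The finite numbers are
# placeholders; only the infinities do any real work in the argument.
payoffs = {
    #              God exists      God doesn't exist
    "believe":    (float("inf"),  -1.0),  # infinite reward vs. finite cost of belief
    "disbelieve": (float("-inf"),  1.0),  # infinite loss vs. finite worldly gain
}

def expected_payoff(choice: str, p_god: float) -> float:
    """Expected payoff of a choice, given probability p_god that God exists."""
    if_exists, if_not = payoffs[choice]
    return p_god * if_exists + (1 - p_god) * if_not

for p in (0.5, 0.01, 1e-9):
    print(p, expected_payoff("believe", p), expected_payoff("disbelieve", p))
# For any p > 0, believing evaluates to +inf and disbelieving to -inf,
# which is why the argument is indifferent to how small p actually is.
```

That indifference to the size of p is exactly what the meatball example below exploits.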

Pascal builds his argument upon the assumption that there is no way to reasonably determine the existence of God. Therefore, we must “wager” by weighing the possible outcomes. His argument also presumes, however, that it is reasonable for a belief to be determined by weighing the consequences—that belief is no different from any other action in this regard. My position is this: to hold a belief on any ground other than a justification of its truth-value is either irrational or non-rational. If it is decidedly impossible to justify the truth-value of a proposition, then it is irrational to believe the proposition.

Let’s apply Pascal’s Wager to a different scenario. Suppose I believe that there is a giant invisible space-meatball speeding towards Earth that will knock it into the sun, and the only way to survive is to steal a local space ship from the shipyard and travel to the moon colony (this is a sci-fi example). I tell my friend this, and he is extremely skeptical. He says that I can’t prove it is true. I respond that he cannot prove to me that it is false. We mutually agree that we will never reasonably determine the truth-value of my claim.

But then I tell him we stand to gain more by choosing to believe that the space-meatball exists. If it does exist, we will gain our lives by believing but lose our lives by not believing. If it does not exist, we will gain or lose little relative to our lives. My friend reluctantly agrees, and we spend the rest of our lives on the moon as ship-stealing fugitives, anxiously awaiting a giant cataclysmic meatball that never comes.

It is clear that using this wager-mode of thinking without giving any attention to the truth-value of one’s beliefs could be fairly disastrous if it results in dangerous behavior. The difference between this example and the example with God is that if I remain skeptical about the meatball but still travel to the moon, I will accomplish my goal if the meatball exists. I cannot, however, merely act in life as if God exists and make it through the pearly gates; I must legitimately believe (at least in the Christian tradition). This means that I must believe despite an inability to falsify or prove the claim in order to accomplish my goal of not burning in hellfire for eternity.

Pascal wants to treat belief in God more like an action than a belief. It isn’t a physical action, though. When it comes to justifying physical action, one may wager in terms of desired outcome; in complex cases, one must weigh possible outcomes in conjunction with risk. If I want to buy a clown nose at Walmart, I must think about the possibility that they don’t sell them and the risk that I will arrive to find that they have none. If I call and the person says they have them in stock, I reason that I have reduced the risk that I won’t be able to purchase one at the store. The decision of what action to take depends both upon what I predict to be true and upon what I want to happen.
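For contrast, here is the same kind of sketch applied to the clown-nose errand. The probabilities and values are hypothetical numbers of my own; the point is only that calling ahead changes the risk term, while the desired outcome stays fixed.

```python
# A toy expected-value model of the clown-nose errand. All numbers are
# hypothetical; gathering information (calling ahead) raises p_success,
# which raises the expected value of making the trip.

def trip_value(p_success: float, gain: float, trip_cost: float) -> float:
    """Expected payoff of driving to the store."""
    return p_success * gain - trip_cost  # the trip cost is paid either way

GAIN = 10.0       # how much I value getting the nose
TRIP_COST = 3.0   # time and gas spent on the drive

print(trip_value(p_success=0.5, gain=GAIN, trip_cost=TRIP_COST))   # no call: 2.0
print(trip_value(p_success=0.95, gain=GAIN, trip_cost=TRIP_COST))  # stock confirmed: 6.5
```

Unlike Pascal’s matrix, every term here is finite, so the wager stays sensitive to the actual probabilities.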

When it comes to belief, the thought process does not include desired outcome. Belief, by definition, is the mental state we have when we regard a proposition as true. So it must be grounded in a justification of whether or not a proposition correctly describes reality in order to fulfill its function. Therefore, the mental process leading up to belief is the means by which we figure out what is real and what is not. If all this is accurate, I think it follows that it makes no sense to believe something without having at least some justification for it. And I am fairly certain that to know what belief rationally entails while disregarding it purposefully is downright irrational.

Since belief in God is required to accomplish the goal of not suffering at the hands of demons forever, people who do not believe are thrust into an odd position. In order for an agnostic to believe in God, one must disregard the entire function of belief. This is what Pascal asks us to do. An agnostic must therefore decide between a rational decision not to hold a belief and a decision to cast rationality aside in favor of soul-insurance. Many would call this faith and would agree that believing in God necessitates disregarding rational thought in this case. Some have no problem with this. I used to have trouble understanding how that is possible, but there is no doubting the evidence that human beings can hold beliefs that contradict other beliefs. It happens all the time.